00:00:00.000 Started by upstream project "autotest-per-patch" build number 132288 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.146 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.147 The recommended git tool is: git 00:00:00.147 using credential 00000000-0000-0000-0000-000000000002 00:00:00.151 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.192 Fetching changes from the remote Git repository 00:00:00.193 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.231 Using shallow fetch with depth 1 00:00:00.231 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.231 > git --version # timeout=10 00:00:00.265 > git --version # 'git version 2.39.2' 00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.280 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.280 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.151 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.164 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.176 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.176 > git config core.sparsecheckout # timeout=10 00:00:06.188 > git read-tree -mu HEAD # timeout=10 00:00:06.204 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.223 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.223 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.339 [Pipeline] Start of Pipeline 00:00:06.352 [Pipeline] library 00:00:06.353 Loading library shm_lib@master 00:00:06.353 Library shm_lib@master is cached. Copying from home. 00:00:06.370 [Pipeline] node 00:00:06.377 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.378 [Pipeline] { 00:00:06.387 [Pipeline] catchError 00:00:06.387 [Pipeline] { 00:00:06.395 [Pipeline] wrap 00:00:06.401 [Pipeline] { 00:00:06.410 [Pipeline] stage 00:00:06.412 [Pipeline] { (Prologue) 00:00:06.606 [Pipeline] sh 00:00:06.896 + logger -p user.info -t JENKINS-CI 00:00:06.911 [Pipeline] echo 00:00:06.912 Node: CYP9 00:00:06.919 [Pipeline] sh 00:00:07.220 [Pipeline] setCustomBuildProperty 00:00:07.234 [Pipeline] echo 00:00:07.235 Cleanup processes 00:00:07.241 [Pipeline] sh 00:00:07.527 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.528 62153 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.542 [Pipeline] sh 00:00:07.830 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.830 ++ grep -v 'sudo pgrep' 00:00:07.830 ++ awk '{print $1}' 00:00:07.830 + sudo kill -9 00:00:07.830 + true 00:00:07.843 [Pipeline] cleanWs 00:00:07.852 [WS-CLEANUP] Deleting project workspace... 00:00:07.852 [WS-CLEANUP] Deferred wipeout is used... 
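The cleanup step above greps for any process still running out of the previous build's SPDK workspace and force-kills it before the workspace is wiped. A minimal standalone sketch of the same pgrep/kill pattern, assuming only that WORKSPACE points at the directory to scrub (the job above uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

  # Kill anything still running out of a stale workspace before cleaning it.
  WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  # pgrep -a prints "PID full-command-line" and -f matches the pattern against
  # the full command line; the grep -v drops the pgrep invocation itself.
  pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
  # With no leftovers the PID list is empty and kill fails, hence "|| true",
  # which is what the "+ true" in the trace above corresponds to.
  sudo kill -9 $pids || true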
00:00:07.859 [WS-CLEANUP] done 00:00:07.863 [Pipeline] setCustomBuildProperty 00:00:07.878 [Pipeline] sh 00:00:08.168 + sudo git config --global --replace-all safe.directory '*' 00:00:08.257 [Pipeline] httpRequest 00:00:08.591 [Pipeline] echo 00:00:08.594 Sorcerer 10.211.164.20 is alive 00:00:08.605 [Pipeline] retry 00:00:08.607 [Pipeline] { 00:00:08.624 [Pipeline] httpRequest 00:00:08.629 HttpMethod: GET 00:00:08.629 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.630 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.649 Response Code: HTTP/1.1 200 OK 00:00:08.649 Success: Status code 200 is in the accepted range: 200,404 00:00:08.650 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:13.759 [Pipeline] } 00:00:13.776 [Pipeline] // retry 00:00:13.784 [Pipeline] sh 00:00:14.072 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.091 [Pipeline] httpRequest 00:00:14.490 [Pipeline] echo 00:00:14.491 Sorcerer 10.211.164.20 is alive 00:00:14.499 [Pipeline] retry 00:00:14.501 [Pipeline] { 00:00:14.510 [Pipeline] httpRequest 00:00:14.514 HttpMethod: GET 00:00:14.514 URL: http://10.211.164.20/packages/spdk_8c4dec1aa19c1027d20b9b379fec9fe92b230f62.tar.gz 00:00:14.515 Sending request to url: http://10.211.164.20/packages/spdk_8c4dec1aa19c1027d20b9b379fec9fe92b230f62.tar.gz 00:00:14.537 Response Code: HTTP/1.1 200 OK 00:00:14.538 Success: Status code 200 is in the accepted range: 200,404 00:00:14.538 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8c4dec1aa19c1027d20b9b379fec9fe92b230f62.tar.gz 00:01:23.316 [Pipeline] } 00:01:23.335 [Pipeline] // retry 00:01:23.343 [Pipeline] sh 00:01:23.675 + tar --no-same-owner -xf spdk_8c4dec1aa19c1027d20b9b379fec9fe92b230f62.tar.gz 00:01:26.988 [Pipeline] sh 00:01:27.276 + git -C spdk log --oneline -n5 00:01:27.276 8c4dec1aa nvmf: Add no_metadata option to nvmf_subsystem_add_ns 00:01:27.276 e029afccb nvmf: Get metadata config by not bdev but bdev_desc 00:01:27.276 f22dc589c bdevperf: Add no_metadata option 00:01:27.276 7e2a20e14 bdevperf: Get metadata config by not bdev but bdev_desc 00:01:27.276 fb5ca6d93 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:01:27.288 [Pipeline] } 00:01:27.302 [Pipeline] // stage 00:01:27.312 [Pipeline] stage 00:01:27.315 [Pipeline] { (Prepare) 00:01:27.334 [Pipeline] writeFile 00:01:27.351 [Pipeline] sh 00:01:27.640 + logger -p user.info -t JENKINS-CI 00:01:27.655 [Pipeline] sh 00:01:27.941 + logger -p user.info -t JENKINS-CI 00:01:27.955 [Pipeline] sh 00:01:28.243 + cat autorun-spdk.conf 00:01:28.243 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.243 SPDK_TEST_NVMF=1 00:01:28.243 SPDK_TEST_NVME_CLI=1 00:01:28.243 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.243 SPDK_TEST_NVMF_NICS=e810 00:01:28.243 SPDK_TEST_VFIOUSER=1 00:01:28.243 SPDK_RUN_UBSAN=1 00:01:28.243 NET_TYPE=phy 00:01:28.252 RUN_NIGHTLY=0 00:01:28.258 [Pipeline] readFile 00:01:28.287 [Pipeline] withEnv 00:01:28.289 [Pipeline] { 00:01:28.303 [Pipeline] sh 00:01:28.593 + set -ex 00:01:28.593 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:28.593 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.593 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.593 ++ SPDK_TEST_NVMF=1 00:01:28.593 ++ SPDK_TEST_NVME_CLI=1 00:01:28.593 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.593 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:28.593 ++ SPDK_TEST_VFIOUSER=1 00:01:28.593 ++ SPDK_RUN_UBSAN=1 00:01:28.593 ++ NET_TYPE=phy 00:01:28.593 ++ RUN_NIGHTLY=0 00:01:28.593 + case $SPDK_TEST_NVMF_NICS in 00:01:28.593 + DRIVERS=ice 00:01:28.593 + [[ tcp == \r\d\m\a ]] 00:01:28.593 + [[ -n ice ]] 00:01:28.593 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:28.593 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:28.593 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:28.593 rmmod: ERROR: Module irdma is not currently loaded 00:01:28.593 rmmod: ERROR: Module i40iw is not currently loaded 00:01:28.593 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:28.593 + true 00:01:28.593 + for D in $DRIVERS 00:01:28.593 + sudo modprobe ice 00:01:28.593 + exit 0 00:01:28.605 [Pipeline] } 00:01:28.621 [Pipeline] // withEnv 00:01:28.627 [Pipeline] } 00:01:28.641 [Pipeline] // stage 00:01:28.652 [Pipeline] catchError 00:01:28.653 [Pipeline] { 00:01:28.667 [Pipeline] timeout 00:01:28.667 Timeout set to expire in 1 hr 0 min 00:01:28.670 [Pipeline] { 00:01:28.684 [Pipeline] stage 00:01:28.687 [Pipeline] { (Tests) 00:01:28.701 [Pipeline] sh 00:01:28.992 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.992 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.992 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.992 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:28.992 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.992 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.992 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:28.992 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.992 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.992 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.992 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:28.992 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.992 + source /etc/os-release 00:01:28.992 ++ NAME='Fedora Linux' 00:01:28.992 ++ VERSION='39 (Cloud Edition)' 00:01:28.992 ++ ID=fedora 00:01:28.992 ++ VERSION_ID=39 00:01:28.992 ++ VERSION_CODENAME= 00:01:28.992 ++ PLATFORM_ID=platform:f39 00:01:28.992 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:28.992 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.992 ++ LOGO=fedora-logo-icon 00:01:28.992 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:28.992 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.992 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:28.992 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.992 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.992 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.992 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:28.992 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.992 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:28.992 ++ SUPPORT_END=2024-11-12 00:01:28.992 ++ VARIANT='Cloud Edition' 00:01:28.992 ++ VARIANT_ID=cloud 00:01:28.992 + uname -a 00:01:28.992 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:28.992 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:32.292 Hugepages 00:01:32.292 node hugesize free / total 00:01:32.292 node0 1048576kB 0 / 0 00:01:32.292 node0 2048kB 0 / 0 00:01:32.292 node1 1048576kB 0 / 0 00:01:32.292 node1 2048kB 0 / 0 00:01:32.292 
00:01:32.292 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.292 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:32.292 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:32.292 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:32.292 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:32.292 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:32.292 + rm -f /tmp/spdk-ld-path 00:01:32.292 + source autorun-spdk.conf 00:01:32.292 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.292 ++ SPDK_TEST_NVMF=1 00:01:32.292 ++ SPDK_TEST_NVME_CLI=1 00:01:32.292 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.292 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.292 ++ SPDK_TEST_VFIOUSER=1 00:01:32.292 ++ SPDK_RUN_UBSAN=1 00:01:32.292 ++ NET_TYPE=phy 00:01:32.292 ++ RUN_NIGHTLY=0 00:01:32.292 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.292 + [[ -n '' ]] 00:01:32.292 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.293 + for M in /var/spdk/build-*-manifest.txt 00:01:32.293 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:32.293 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.293 + for M in /var/spdk/build-*-manifest.txt 00:01:32.293 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.293 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.293 + for M in /var/spdk/build-*-manifest.txt 00:01:32.293 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.293 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.293 ++ uname 00:01:32.293 + [[ Linux == \L\i\n\u\x ]] 00:01:32.293 + sudo dmesg -T 00:01:32.293 + sudo dmesg --clear 00:01:32.293 + dmesg_pid=63151 00:01:32.293 + [[ Fedora Linux == FreeBSD ]] 00:01:32.293 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.293 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.293 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.293 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.293 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.293 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.293 + sudo dmesg -Tw 00:01:32.293 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.293 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:32.293 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.293 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.293 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.293 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.293 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.293 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.293 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.293 10:41:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:32.293 10:41:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:32.293 10:41:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:32.293 10:41:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:32.293 10:41:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.553 10:41:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:32.553 10:41:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:32.553 10:41:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.553 10:41:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.553 10:41:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.553 10:41:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.553 10:41:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.553 10:41:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.554 10:41:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.554 10:41:51 -- paths/export.sh@5 -- $ export PATH 00:01:32.554 10:41:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.554 10:41:51 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:32.554 10:41:51 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:32.554 10:41:51 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731663711.XXXXXX 00:01:32.554 10:41:51 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731663711.ZO944q 00:01:32.554 10:41:51 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:32.554 10:41:51 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:32.554 10:41:51 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:32.554 10:41:51 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.554 10:41:51 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.554 10:41:51 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:32.554 10:41:51 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:32.554 10:41:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.554 10:41:51 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:32.554 10:41:51 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:32.554 10:41:51 -- pm/common@17 -- $ local monitor 00:01:32.554 10:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.554 10:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.554 10:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.554 10:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.554 10:41:51 -- pm/common@21 -- $ date +%s 00:01:32.554 10:41:51 -- pm/common@21 -- $ date +%s 00:01:32.554 10:41:51 -- pm/common@25 -- $ sleep 1 00:01:32.554 10:41:51 -- pm/common@21 -- $ date +%s 00:01:32.554 10:41:51 -- pm/common@21 -- $ date +%s 00:01:32.554 10:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663711 00:01:32.554 10:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663711 00:01:32.554 10:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663711 00:01:32.554 10:41:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663711 00:01:32.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663711_collect-cpu-load.pm.log 00:01:32.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663711_collect-vmstat.pm.log 00:01:32.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663711_collect-cpu-temp.pm.log 00:01:32.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663711_collect-bmc-pm.bmc.pm.log 00:01:33.494 10:41:52 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:33.494 10:41:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.494 10:41:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.494 10:41:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.494 10:41:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.494 Fri Nov 15 09:41:52 AM UTC 2024 00:01:33.495 10:41:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.495 v25.01-pre-205-g8c4dec1aa 00:01:33.495 10:41:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.495 10:41:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.495 10:41:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.495 10:41:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:33.495 10:41:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:33.495 10:41:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.495 ************************************ 00:01:33.495 START TEST ubsan 00:01:33.495 ************************************ 00:01:33.495 10:41:53 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:33.495 using ubsan 00:01:33.495 00:01:33.495 real 0m0.001s 00:01:33.495 user 0m0.001s 00:01:33.495 sys 0m0.000s 00:01:33.495 10:41:53 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:33.495 10:41:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.495 ************************************ 00:01:33.495 END TEST ubsan 00:01:33.495 ************************************ 00:01:33.754 10:41:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.754 10:41:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.754 10:41:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.754 10:41:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.754 10:41:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.754 10:41:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.754 10:41:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.754 10:41:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.754 
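The "START TEST ubsan / END TEST ubsan" banners above come from SPDK's run_test helper (defined in test/common/autotest_common.sh), which wraps a command in banners plus timing output. The real helper also handles xtrace and timing bookkeeping; this is only a simplified re-creation of the visible behavior, with run_test_sketch as a made-up name:

  # Simplified run_test-style wrapper: banner, run the command, report, banner.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      local start=$SECONDS
      "$@"
      local rc=$?    # exit status of the wrapped command
      echo "real    $((SECONDS - start))s"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test_sketch ubsan echo 'using ubsan'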
10:41:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:33.754 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:33.754 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.323 Using 'verbs' RDMA provider 00:01:50.166 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:02.390 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:02.961 Creating mk/config.mk...done. 00:02:02.961 Creating mk/cc.flags.mk...done. 00:02:02.961 Type 'make' to build. 00:02:02.961 10:42:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:02.961 10:42:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:02.961 10:42:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:02.961 10:42:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.961 ************************************ 00:02:02.961 START TEST make 00:02:02.961 ************************************ 00:02:02.961 10:42:22 make -- common/autotest_common.sh@1127 -- $ make -j144 00:02:03.532 make[1]: Nothing to be done for 'all'. 00:02:04.916 The Meson build system 00:02:04.916 Version: 1.5.0 00:02:04.916 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:04.916 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:04.916 Build type: native build 00:02:04.916 Project name: libvfio-user 00:02:04.916 Project version: 0.0.1 00:02:04.916 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.916 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.916 Host machine cpu family: x86_64 00:02:04.916 Host machine cpu: x86_64 00:02:04.916 Run-time dependency threads found: YES 00:02:04.916 Library dl found: YES 00:02:04.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.916 Run-time dependency json-c found: YES 0.17 00:02:04.916 Run-time dependency cmocka found: YES 1.1.7 00:02:04.916 Program pytest-3 found: NO 00:02:04.916 Program flake8 found: NO 00:02:04.916 Program misspell-fixer found: NO 00:02:04.916 Program restructuredtext-lint found: NO 00:02:04.917 Program valgrind found: YES (/usr/bin/valgrind) 00:02:04.917 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.917 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.917 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.917 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:04.917 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:04.917 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:04.917 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:04.917 Build targets in project: 8 00:02:04.917 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:04.917 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:04.917 00:02:04.917 libvfio-user 0.0.1 00:02:04.917 00:02:04.917 User defined options 00:02:04.917 buildtype : debug 00:02:04.917 default_library: shared 00:02:04.917 libdir : /usr/local/lib 00:02:04.917 00:02:04.917 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.176 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:05.436 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:05.436 [2/37] Compiling C object samples/null.p/null.c.o 00:02:05.436 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:05.436 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:05.436 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:05.436 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:05.436 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:05.436 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:05.436 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:05.436 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:05.436 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:05.436 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:05.436 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:05.436 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:05.436 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:05.436 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:05.436 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:05.436 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:05.436 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:05.436 [20/37] Compiling C object samples/server.p/server.c.o 00:02:05.436 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:05.436 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:05.436 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:05.436 [24/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:05.436 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:05.436 [26/37] Compiling C object samples/client.p/client.c.o 00:02:05.436 [27/37] Linking target samples/client 00:02:05.697 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:05.697 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:05.697 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:05.697 [31/37] Linking target test/unit_tests 00:02:05.697 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:05.697 [33/37] Linking target samples/server 00:02:05.697 [34/37] Linking target samples/lspci 00:02:05.697 [35/37] Linking target samples/null 00:02:05.697 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:05.697 [37/37] Linking target samples/gpio-pci-idio-16 00:02:05.697 INFO: autodetecting backend as ninja 00:02:05.697 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
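The libvfio-user build above follows the standard meson out-of-tree flow: configure a debug build directory, compile it with ninja, then stage the result with a DESTDIR install (visible on the next line). A generic sketch of that sequence, with SRC/BUILD/STAGE as placeholder paths standing in for the spdk tree locations in the log:

  # meson configure + ninja build + staged install, as used for libvfio-user.
  SRC=./libvfio-user
  BUILD=$SRC/build-debug
  STAGE=$PWD/stage
  meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared
  ninja -C "$BUILD"
  # DESTDIR prefixes every install path, so files are staged under $STAGE
  # instead of being written to the configured prefix (/usr/local/lib here).
  DESTDIR="$STAGE" meson install --quiet -C "$BUILD"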
00:02:05.959 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:06.221 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:06.221 ninja: no work to do. 00:02:12.823 The Meson build system 00:02:12.823 Version: 1.5.0 00:02:12.823 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:12.823 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:12.823 Build type: native build 00:02:12.823 Program cat found: YES (/usr/bin/cat) 00:02:12.823 Project name: DPDK 00:02:12.823 Project version: 24.03.0 00:02:12.823 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:12.823 C linker for the host machine: cc ld.bfd 2.40-14 00:02:12.823 Host machine cpu family: x86_64 00:02:12.823 Host machine cpu: x86_64 00:02:12.823 Message: ## Building in Developer Mode ## 00:02:12.823 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:12.823 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:12.823 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:12.823 Program python3 found: YES (/usr/bin/python3) 00:02:12.823 Program cat found: YES (/usr/bin/cat) 00:02:12.823 Compiler for C supports arguments -march=native: YES 00:02:12.823 Checking for size of "void *" : 8 00:02:12.823 Checking for size of "void *" : 8 (cached) 00:02:12.823 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:12.823 Library m found: YES 00:02:12.823 Library numa found: YES 00:02:12.823 Has header "numaif.h" : YES 00:02:12.823 Library fdt found: NO 00:02:12.823 Library execinfo found: NO 00:02:12.823 Has header "execinfo.h" : YES 00:02:12.823 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:12.823 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:12.823 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:12.823 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.823 Run-time dependency openssl found: YES 3.1.1 00:02:12.823 Run-time dependency libpcap found: YES 1.10.4 00:02:12.823 Has header "pcap.h" with dependency libpcap: YES 00:02:12.823 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.823 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.823 Compiler for C supports arguments -Wformat: YES 00:02:12.823 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.823 Compiler for C supports arguments -Wformat-security: NO 00:02:12.823 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.823 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.823 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.823 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.823 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.823 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.823 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.823 Compiler for C supports arguments -Wundef: YES 00:02:12.823 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.823 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:12.823 Compiler for C supports arguments 
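Each "Compiler for C supports arguments -W...: YES/NO" line in this configure output is meson probing the compiler with a tiny test compile and caching the answer. A rough shell approximation of one such probe (only an approximation: gcc can silently accept unknown -Wno-* options, which is why meson's real check is more involved):

  # Approximate a meson "compiler supports argument" probe from the shell:
  # compile an empty program with the candidate flag plus -Werror.
  flag=-Wno-cast-qual
  if echo 'int main(void){return 0;}' | cc -Werror "$flag" -x c - -o /dev/null 2>/dev/null; then
      echo "Compiler for C supports arguments $flag: YES"
  else
      echo "Compiler for C supports arguments $flag: NO"
  fi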
-Wno-packed-not-aligned: YES 00:02:12.823 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.823 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.823 Program objdump found: YES (/usr/bin/objdump) 00:02:12.823 Compiler for C supports arguments -mavx512f: YES 00:02:12.823 Checking if "AVX512 checking" compiles: YES 00:02:12.823 Fetching value of define "__SSE4_2__" : 1 00:02:12.823 Fetching value of define "__AES__" : 1 00:02:12.823 Fetching value of define "__AVX__" : 1 00:02:12.823 Fetching value of define "__AVX2__" : 1 00:02:12.823 Fetching value of define "__AVX512BW__" : 1 00:02:12.823 Fetching value of define "__AVX512CD__" : 1 00:02:12.823 Fetching value of define "__AVX512DQ__" : 1 00:02:12.823 Fetching value of define "__AVX512F__" : 1 00:02:12.823 Fetching value of define "__AVX512VL__" : 1 00:02:12.823 Fetching value of define "__PCLMUL__" : 1 00:02:12.823 Fetching value of define "__RDRND__" : 1 00:02:12.823 Fetching value of define "__RDSEED__" : 1 00:02:12.823 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:12.823 Fetching value of define "__znver1__" : (undefined) 00:02:12.823 Fetching value of define "__znver2__" : (undefined) 00:02:12.823 Fetching value of define "__znver3__" : (undefined) 00:02:12.823 Fetching value of define "__znver4__" : (undefined) 00:02:12.823 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.823 Message: lib/log: Defining dependency "log" 00:02:12.823 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.823 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.823 Checking for function "getentropy" : NO 00:02:12.823 Message: lib/eal: Defining dependency "eal" 00:02:12.823 Message: lib/ring: Defining dependency "ring" 00:02:12.823 Message: lib/rcu: Defining dependency "rcu" 00:02:12.823 Message: lib/mempool: Defining dependency "mempool" 00:02:12.823 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.823 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.823 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.823 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.823 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.823 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:12.823 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:12.823 Compiler for C supports arguments -mpclmul: YES 00:02:12.823 Compiler for C supports arguments -maes: YES 00:02:12.823 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.823 Compiler for C supports arguments -mavx512bw: YES 00:02:12.823 Compiler for C supports arguments -mavx512dq: YES 00:02:12.823 Compiler for C supports arguments -mavx512vl: YES 00:02:12.823 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.823 Compiler for C supports arguments -mavx2: YES 00:02:12.823 Compiler for C supports arguments -mavx: YES 00:02:12.823 Message: lib/net: Defining dependency "net" 00:02:12.823 Message: lib/meter: Defining dependency "meter" 00:02:12.823 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.823 Message: lib/pci: Defining dependency "pci" 00:02:12.823 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.823 Message: lib/hash: Defining dependency "hash" 00:02:12.823 Message: lib/timer: Defining dependency "timer" 00:02:12.823 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.823 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.823 Message: lib/dmadev: Defining dependency "dmadev" 
00:02:12.823 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.823 Message: lib/power: Defining dependency "power" 00:02:12.823 Message: lib/reorder: Defining dependency "reorder" 00:02:12.823 Message: lib/security: Defining dependency "security" 00:02:12.823 Has header "linux/userfaultfd.h" : YES 00:02:12.823 Has header "linux/vduse.h" : YES 00:02:12.823 Message: lib/vhost: Defining dependency "vhost" 00:02:12.823 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.823 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.823 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.823 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.823 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.823 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.823 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.823 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.823 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.823 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.823 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.823 Configuring doxy-api-html.conf using configuration 00:02:12.823 Configuring doxy-api-man.conf using configuration 00:02:12.823 Program mandb found: YES (/usr/bin/mandb) 00:02:12.823 Program sphinx-build found: NO 00:02:12.824 Configuring rte_build_config.h using configuration 00:02:12.824 Message: 00:02:12.824 ================= 00:02:12.824 Applications Enabled 00:02:12.824 ================= 00:02:12.824 00:02:12.824 apps: 00:02:12.824 00:02:12.824 00:02:12.824 Message: 00:02:12.824 ================= 00:02:12.824 Libraries Enabled 00:02:12.824 ================= 00:02:12.824 00:02:12.824 libs: 00:02:12.824 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.824 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.824 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.824 00:02:12.824 Message: 00:02:12.824 =============== 00:02:12.824 Drivers Enabled 00:02:12.824 =============== 00:02:12.824 00:02:12.824 common: 00:02:12.824 00:02:12.824 bus: 00:02:12.824 pci, vdev, 00:02:12.824 mempool: 00:02:12.824 ring, 00:02:12.824 dma: 00:02:12.824 00:02:12.824 net: 00:02:12.824 00:02:12.824 crypto: 00:02:12.824 00:02:12.824 compress: 00:02:12.824 00:02:12.824 vdpa: 00:02:12.824 00:02:12.824 00:02:12.824 Message: 00:02:12.824 ================= 00:02:12.824 Content Skipped 00:02:12.824 ================= 00:02:12.824 00:02:12.824 apps: 00:02:12.824 dumpcap: explicitly disabled via build config 00:02:12.824 graph: explicitly disabled via build config 00:02:12.824 pdump: explicitly disabled via build config 00:02:12.824 proc-info: explicitly disabled via build config 00:02:12.824 test-acl: explicitly disabled via build config 00:02:12.824 test-bbdev: explicitly disabled via build config 00:02:12.824 test-cmdline: explicitly disabled via build config 00:02:12.824 test-compress-perf: explicitly disabled via build config 00:02:12.824 test-crypto-perf: explicitly disabled via build config 00:02:12.824 test-dma-perf: explicitly disabled via build config 00:02:12.824 test-eventdev: explicitly disabled via build config 00:02:12.824 test-fib: explicitly disabled via build config 00:02:12.824 test-flow-perf: explicitly disabled via build config 00:02:12.824 test-gpudev: explicitly disabled 
via build config 00:02:12.824 test-mldev: explicitly disabled via build config 00:02:12.824 test-pipeline: explicitly disabled via build config 00:02:12.824 test-pmd: explicitly disabled via build config 00:02:12.824 test-regex: explicitly disabled via build config 00:02:12.824 test-sad: explicitly disabled via build config 00:02:12.824 test-security-perf: explicitly disabled via build config 00:02:12.824 00:02:12.824 libs: 00:02:12.824 argparse: explicitly disabled via build config 00:02:12.824 metrics: explicitly disabled via build config 00:02:12.824 acl: explicitly disabled via build config 00:02:12.824 bbdev: explicitly disabled via build config 00:02:12.824 bitratestats: explicitly disabled via build config 00:02:12.824 bpf: explicitly disabled via build config 00:02:12.824 cfgfile: explicitly disabled via build config 00:02:12.824 distributor: explicitly disabled via build config 00:02:12.824 efd: explicitly disabled via build config 00:02:12.824 eventdev: explicitly disabled via build config 00:02:12.824 dispatcher: explicitly disabled via build config 00:02:12.824 gpudev: explicitly disabled via build config 00:02:12.824 gro: explicitly disabled via build config 00:02:12.824 gso: explicitly disabled via build config 00:02:12.824 ip_frag: explicitly disabled via build config 00:02:12.824 jobstats: explicitly disabled via build config 00:02:12.824 latencystats: explicitly disabled via build config 00:02:12.824 lpm: explicitly disabled via build config 00:02:12.824 member: explicitly disabled via build config 00:02:12.824 pcapng: explicitly disabled via build config 00:02:12.824 rawdev: explicitly disabled via build config 00:02:12.824 regexdev: explicitly disabled via build config 00:02:12.824 mldev: explicitly disabled via build config 00:02:12.824 rib: explicitly disabled via build config 00:02:12.824 sched: explicitly disabled via build config 00:02:12.824 stack: explicitly disabled via build config 00:02:12.824 ipsec: explicitly disabled via build config 00:02:12.824 pdcp: explicitly disabled via build config 00:02:12.824 fib: explicitly disabled via build config 00:02:12.824 port: explicitly disabled via build config 00:02:12.824 pdump: explicitly disabled via build config 00:02:12.824 table: explicitly disabled via build config 00:02:12.824 pipeline: explicitly disabled via build config 00:02:12.824 graph: explicitly disabled via build config 00:02:12.824 node: explicitly disabled via build config 00:02:12.824 00:02:12.824 drivers: 00:02:12.824 common/cpt: not in enabled drivers build config 00:02:12.824 common/dpaax: not in enabled drivers build config 00:02:12.824 common/iavf: not in enabled drivers build config 00:02:12.824 common/idpf: not in enabled drivers build config 00:02:12.824 common/ionic: not in enabled drivers build config 00:02:12.824 common/mvep: not in enabled drivers build config 00:02:12.824 common/octeontx: not in enabled drivers build config 00:02:12.824 bus/auxiliary: not in enabled drivers build config 00:02:12.824 bus/cdx: not in enabled drivers build config 00:02:12.824 bus/dpaa: not in enabled drivers build config 00:02:12.824 bus/fslmc: not in enabled drivers build config 00:02:12.824 bus/ifpga: not in enabled drivers build config 00:02:12.824 bus/platform: not in enabled drivers build config 00:02:12.824 bus/uacce: not in enabled drivers build config 00:02:12.824 bus/vmbus: not in enabled drivers build config 00:02:12.824 common/cnxk: not in enabled drivers build config 00:02:12.824 common/mlx5: not in enabled drivers build config 00:02:12.824 
common/nfp: not in enabled drivers build config 00:02:12.824 common/nitrox: not in enabled drivers build config 00:02:12.824 common/qat: not in enabled drivers build config 00:02:12.824 common/sfc_efx: not in enabled drivers build config 00:02:12.824 mempool/bucket: not in enabled drivers build config 00:02:12.824 mempool/cnxk: not in enabled drivers build config 00:02:12.824 mempool/dpaa: not in enabled drivers build config 00:02:12.824 mempool/dpaa2: not in enabled drivers build config 00:02:12.824 mempool/octeontx: not in enabled drivers build config 00:02:12.824 mempool/stack: not in enabled drivers build config 00:02:12.824 dma/cnxk: not in enabled drivers build config 00:02:12.824 dma/dpaa: not in enabled drivers build config 00:02:12.824 dma/dpaa2: not in enabled drivers build config 00:02:12.824 dma/hisilicon: not in enabled drivers build config 00:02:12.824 dma/idxd: not in enabled drivers build config 00:02:12.824 dma/ioat: not in enabled drivers build config 00:02:12.824 dma/skeleton: not in enabled drivers build config 00:02:12.824 net/af_packet: not in enabled drivers build config 00:02:12.824 net/af_xdp: not in enabled drivers build config 00:02:12.824 net/ark: not in enabled drivers build config 00:02:12.824 net/atlantic: not in enabled drivers build config 00:02:12.824 net/avp: not in enabled drivers build config 00:02:12.824 net/axgbe: not in enabled drivers build config 00:02:12.824 net/bnx2x: not in enabled drivers build config 00:02:12.824 net/bnxt: not in enabled drivers build config 00:02:12.824 net/bonding: not in enabled drivers build config 00:02:12.824 net/cnxk: not in enabled drivers build config 00:02:12.824 net/cpfl: not in enabled drivers build config 00:02:12.824 net/cxgbe: not in enabled drivers build config 00:02:12.824 net/dpaa: not in enabled drivers build config 00:02:12.824 net/dpaa2: not in enabled drivers build config 00:02:12.824 net/e1000: not in enabled drivers build config 00:02:12.824 net/ena: not in enabled drivers build config 00:02:12.824 net/enetc: not in enabled drivers build config 00:02:12.824 net/enetfec: not in enabled drivers build config 00:02:12.824 net/enic: not in enabled drivers build config 00:02:12.824 net/failsafe: not in enabled drivers build config 00:02:12.824 net/fm10k: not in enabled drivers build config 00:02:12.824 net/gve: not in enabled drivers build config 00:02:12.824 net/hinic: not in enabled drivers build config 00:02:12.824 net/hns3: not in enabled drivers build config 00:02:12.824 net/i40e: not in enabled drivers build config 00:02:12.824 net/iavf: not in enabled drivers build config 00:02:12.824 net/ice: not in enabled drivers build config 00:02:12.824 net/idpf: not in enabled drivers build config 00:02:12.824 net/igc: not in enabled drivers build config 00:02:12.824 net/ionic: not in enabled drivers build config 00:02:12.824 net/ipn3ke: not in enabled drivers build config 00:02:12.824 net/ixgbe: not in enabled drivers build config 00:02:12.824 net/mana: not in enabled drivers build config 00:02:12.824 net/memif: not in enabled drivers build config 00:02:12.824 net/mlx4: not in enabled drivers build config 00:02:12.824 net/mlx5: not in enabled drivers build config 00:02:12.824 net/mvneta: not in enabled drivers build config 00:02:12.824 net/mvpp2: not in enabled drivers build config 00:02:12.824 net/netvsc: not in enabled drivers build config 00:02:12.824 net/nfb: not in enabled drivers build config 00:02:12.824 net/nfp: not in enabled drivers build config 00:02:12.824 net/ngbe: not in enabled drivers build 
config 00:02:12.824 net/null: not in enabled drivers build config 00:02:12.824 net/octeontx: not in enabled drivers build config 00:02:12.824 net/octeon_ep: not in enabled drivers build config 00:02:12.824 net/pcap: not in enabled drivers build config 00:02:12.824 net/pfe: not in enabled drivers build config 00:02:12.824 net/qede: not in enabled drivers build config 00:02:12.824 net/ring: not in enabled drivers build config 00:02:12.824 net/sfc: not in enabled drivers build config 00:02:12.824 net/softnic: not in enabled drivers build config 00:02:12.824 net/tap: not in enabled drivers build config 00:02:12.824 net/thunderx: not in enabled drivers build config 00:02:12.824 net/txgbe: not in enabled drivers build config 00:02:12.824 net/vdev_netvsc: not in enabled drivers build config 00:02:12.824 net/vhost: not in enabled drivers build config 00:02:12.824 net/virtio: not in enabled drivers build config 00:02:12.824 net/vmxnet3: not in enabled drivers build config 00:02:12.824 raw/*: missing internal dependency, "rawdev" 00:02:12.824 crypto/armv8: not in enabled drivers build config 00:02:12.824 crypto/bcmfs: not in enabled drivers build config 00:02:12.824 crypto/caam_jr: not in enabled drivers build config 00:02:12.824 crypto/ccp: not in enabled drivers build config 00:02:12.825 crypto/cnxk: not in enabled drivers build config 00:02:12.825 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.825 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.825 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.825 crypto/mlx5: not in enabled drivers build config 00:02:12.825 crypto/mvsam: not in enabled drivers build config 00:02:12.825 crypto/nitrox: not in enabled drivers build config 00:02:12.825 crypto/null: not in enabled drivers build config 00:02:12.825 crypto/octeontx: not in enabled drivers build config 00:02:12.825 crypto/openssl: not in enabled drivers build config 00:02:12.825 crypto/scheduler: not in enabled drivers build config 00:02:12.825 crypto/uadk: not in enabled drivers build config 00:02:12.825 crypto/virtio: not in enabled drivers build config 00:02:12.825 compress/isal: not in enabled drivers build config 00:02:12.825 compress/mlx5: not in enabled drivers build config 00:02:12.825 compress/nitrox: not in enabled drivers build config 00:02:12.825 compress/octeontx: not in enabled drivers build config 00:02:12.825 compress/zlib: not in enabled drivers build config 00:02:12.825 regex/*: missing internal dependency, "regexdev" 00:02:12.825 ml/*: missing internal dependency, "mldev" 00:02:12.825 vdpa/ifc: not in enabled drivers build config 00:02:12.825 vdpa/mlx5: not in enabled drivers build config 00:02:12.825 vdpa/nfp: not in enabled drivers build config 00:02:12.825 vdpa/sfc: not in enabled drivers build config 00:02:12.825 event/*: missing internal dependency, "eventdev" 00:02:12.825 baseband/*: missing internal dependency, "bbdev" 00:02:12.825 gpu/*: missing internal dependency, "gpudev" 00:02:12.825 00:02:12.825 00:02:12.825 Build targets in project: 84 00:02:12.825 00:02:12.825 DPDK 24.03.0 00:02:12.825 00:02:12.825 User defined options 00:02:12.825 buildtype : debug 00:02:12.825 default_library : shared 00:02:12.825 libdir : lib 00:02:12.825 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:12.825 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.825 c_link_args : 00:02:12.825 cpu_instruction_set: native 00:02:12.825 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:12.825 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:12.825 enable_docs : false 00:02:12.825 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:12.825 enable_kmods : false 00:02:12.825 max_lcores : 128 00:02:12.825 tests : false 00:02:12.825 00:02:12.825 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.825 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:12.825 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.825 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.825 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.825 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.825 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.825 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.825 [7/267] Linking static target lib/librte_kvargs.a 00:02:12.825 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.825 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.825 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.825 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.825 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.825 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.825 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.825 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.825 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.825 [17/267] Linking static target lib/librte_log.a 00:02:12.825 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.825 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.825 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.087 [21/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.087 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:13.087 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.087 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.087 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:13.087 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.087 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.087 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:13.087 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.087 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.087 [31/267] Linking static target 
lib/librte_pci.a 00:02:13.087 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.087 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.087 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.087 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.087 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.087 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:13.087 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:13.347 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.347 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.347 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.347 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.347 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.347 [44/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.347 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.347 [46/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.347 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.347 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.347 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.347 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.347 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.347 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.347 [53/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.347 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.347 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.347 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.347 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.347 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.347 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.347 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.347 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.347 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.347 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:13.347 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.347 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.347 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.347 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.347 [68/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:13.347 [69/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.348 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.348 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.348 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.348 [73/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.348 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.348 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.348 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.348 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.348 [78/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.348 [79/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.348 [80/267] Linking static target lib/librte_telemetry.a 00:02:13.348 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.348 [82/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.348 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.348 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.348 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.348 [86/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.348 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.348 [88/267] Linking static target lib/librte_ring.a 00:02:13.348 [89/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.348 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.348 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.348 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.348 [93/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.348 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.348 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.348 [96/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.348 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.348 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.348 [99/267] Linking static target lib/librte_timer.a 00:02:13.348 [100/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.348 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.348 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.348 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.348 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.348 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.609 [106/267] Linking static target lib/librte_meter.a 00:02:13.609 [107/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.609 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.609 [109/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.609 [110/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.609 [111/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.609 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.609 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.609 [114/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.609 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.609 [116/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.609 [117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.609 [118/267] Linking static target lib/librte_cmdline.a 00:02:13.609 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.609 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.609 [121/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.609 [122/267] Linking static target lib/librte_mempool.a 00:02:13.609 [123/267] Linking static target lib/librte_dmadev.a 00:02:13.609 [124/267] Linking static target lib/librte_net.a 00:02:13.609 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.609 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.609 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.609 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.609 [129/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.609 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.609 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.609 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.609 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.609 [134/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.609 [135/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.609 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.609 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.609 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.609 [139/267] Linking static target lib/librte_compressdev.a 00:02:13.609 [140/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.609 [141/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.609 [142/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.609 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.609 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.609 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.609 [146/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.609 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.609 [148/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.609 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.609 [150/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.609 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:02:13.609 [152/267] Linking static target lib/librte_rcu.a 00:02:13.609 [153/267] Linking static target lib/librte_power.a 00:02:13.609 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.609 [155/267] Linking target lib/librte_log.so.24.1 00:02:13.609 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.609 [157/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.609 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.609 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.609 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.609 [161/267] Linking static target lib/librte_reorder.a 00:02:13.609 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.609 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.609 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.609 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.610 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.610 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.610 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.610 [169/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.610 [170/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.610 [171/267] Linking static target lib/librte_eal.a 00:02:13.610 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.610 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.610 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.610 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.610 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.610 [177/267] Linking static target lib/librte_security.a 00:02:13.610 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.610 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.610 [180/267] Linking static target lib/librte_mbuf.a 00:02:13.610 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.610 [182/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.610 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.871 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.871 [185/267] Linking target lib/librte_kvargs.so.24.1 00:02:13.871 [186/267] Linking static target drivers/librte_bus_vdev.a 00:02:13.871 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.871 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.871 [189/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.871 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.871 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.871 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.871 [193/267] Generating drivers/rte_bus_pci.pmd.c with a 
custom command 00:02:13.871 [194/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.871 [195/267] Linking static target lib/librte_hash.a 00:02:13.871 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.871 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.871 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.871 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.871 [200/267] Linking static target drivers/librte_bus_pci.a 00:02:13.871 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.871 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.871 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.871 [204/267] Linking static target drivers/librte_mempool_ring.a 00:02:13.871 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.871 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.871 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.871 [208/267] Linking static target lib/librte_cryptodev.a 00:02:14.132 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.132 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.132 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:14.132 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.132 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.393 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.393 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.393 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.393 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.393 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.393 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:14.393 [220/267] Linking static target lib/librte_ethdev.a 00:02:14.655 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.655 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.655 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.655 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.916 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.916 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.490 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.490 [228/267] Linking static target lib/librte_vhost.a 00:02:16.063 [229/267] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:17.974 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.561 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.132 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.132 [233/267] Linking target lib/librte_eal.so.24.1 00:02:25.406 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.406 [235/267] Linking target lib/librte_timer.so.24.1 00:02:25.406 [236/267] Linking target lib/librte_ring.so.24.1 00:02:25.406 [237/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.406 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.406 [239/267] Linking target lib/librte_meter.so.24.1 00:02:25.407 [240/267] Linking target lib/librte_pci.so.24.1 00:02:25.672 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.672 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.672 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.672 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.672 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.672 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:25.672 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:25.672 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.672 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.672 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.672 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.934 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.934 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.934 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:25.934 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.934 [256/267] Linking target lib/librte_net.so.24.1 00:02:25.934 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:26.195 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.195 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.195 [260/267] Linking target lib/librte_hash.so.24.1 00:02:26.195 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:26.195 [262/267] Linking target lib/librte_security.so.24.1 00:02:26.195 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:26.195 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.195 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.456 [266/267] Linking target lib/librte_power.so.24.1 00:02:26.456 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:26.456 INFO: autodetecting backend as ninja 00:02:26.456 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:29.756 CC lib/log/log.o 00:02:29.756 CC lib/ut_mock/mock.o 00:02:29.756 CC lib/log/log_flags.o 00:02:29.756 CC lib/log/log_deprecated.o 00:02:29.756 CC lib/ut/ut.o 00:02:29.756 LIB libspdk_ut.a 00:02:29.756 LIB libspdk_ut_mock.a 
00:02:29.756 LIB libspdk_log.a 00:02:29.756 SO libspdk_ut_mock.so.6.0 00:02:29.756 SO libspdk_ut.so.2.0 00:02:29.756 SO libspdk_log.so.7.1 00:02:29.756 SYMLINK libspdk_ut_mock.so 00:02:29.756 SYMLINK libspdk_ut.so 00:02:29.756 SYMLINK libspdk_log.so 00:02:30.328 CC lib/dma/dma.o 00:02:30.328 CC lib/util/base64.o 00:02:30.328 CC lib/util/bit_array.o 00:02:30.328 CC lib/util/cpuset.o 00:02:30.328 CXX lib/trace_parser/trace.o 00:02:30.328 CC lib/util/crc16.o 00:02:30.328 CC lib/ioat/ioat.o 00:02:30.328 CC lib/util/crc32.o 00:02:30.328 CC lib/util/crc32c.o 00:02:30.328 CC lib/util/crc32_ieee.o 00:02:30.328 CC lib/util/crc64.o 00:02:30.328 CC lib/util/dif.o 00:02:30.328 CC lib/util/fd.o 00:02:30.328 CC lib/util/fd_group.o 00:02:30.328 CC lib/util/file.o 00:02:30.328 CC lib/util/hexlify.o 00:02:30.328 CC lib/util/iov.o 00:02:30.328 CC lib/util/math.o 00:02:30.328 CC lib/util/net.o 00:02:30.328 CC lib/util/pipe.o 00:02:30.328 CC lib/util/strerror_tls.o 00:02:30.328 CC lib/util/string.o 00:02:30.328 CC lib/util/uuid.o 00:02:30.328 CC lib/util/xor.o 00:02:30.328 CC lib/util/zipf.o 00:02:30.328 CC lib/util/md5.o 00:02:30.328 CC lib/vfio_user/host/vfio_user.o 00:02:30.328 CC lib/vfio_user/host/vfio_user_pci.o 00:02:30.328 LIB libspdk_dma.a 00:02:30.328 SO libspdk_dma.so.5.0 00:02:30.328 LIB libspdk_ioat.a 00:02:30.589 SYMLINK libspdk_dma.so 00:02:30.589 SO libspdk_ioat.so.7.0 00:02:30.589 SYMLINK libspdk_ioat.so 00:02:30.589 LIB libspdk_util.a 00:02:30.589 LIB libspdk_vfio_user.a 00:02:30.589 SO libspdk_vfio_user.so.5.0 00:02:30.589 SO libspdk_util.so.10.1 00:02:30.589 SYMLINK libspdk_vfio_user.so 00:02:30.851 SYMLINK libspdk_util.so 00:02:30.851 LIB libspdk_trace_parser.a 00:02:31.113 SO libspdk_trace_parser.so.6.0 00:02:31.113 SYMLINK libspdk_trace_parser.so 00:02:31.113 CC lib/json/json_util.o 00:02:31.113 CC lib/json/json_parse.o 00:02:31.113 CC lib/json/json_write.o 00:02:31.113 CC lib/conf/conf.o 00:02:31.113 CC lib/env_dpdk/env.o 00:02:31.113 CC lib/env_dpdk/memory.o 00:02:31.113 CC lib/rdma_utils/rdma_utils.o 00:02:31.113 CC lib/env_dpdk/pci.o 00:02:31.113 CC lib/env_dpdk/init.o 00:02:31.113 CC lib/env_dpdk/threads.o 00:02:31.113 CC lib/env_dpdk/pci_ioat.o 00:02:31.113 CC lib/vmd/vmd.o 00:02:31.113 CC lib/env_dpdk/pci_virtio.o 00:02:31.113 CC lib/vmd/led.o 00:02:31.113 CC lib/env_dpdk/pci_vmd.o 00:02:31.113 CC lib/idxd/idxd.o 00:02:31.113 CC lib/env_dpdk/pci_idxd.o 00:02:31.113 CC lib/idxd/idxd_user.o 00:02:31.113 CC lib/env_dpdk/pci_event.o 00:02:31.113 CC lib/env_dpdk/sigbus_handler.o 00:02:31.113 CC lib/idxd/idxd_kernel.o 00:02:31.113 CC lib/env_dpdk/pci_dpdk.o 00:02:31.113 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.113 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.374 LIB libspdk_conf.a 00:02:31.374 SO libspdk_conf.so.6.0 00:02:31.374 LIB libspdk_json.a 00:02:31.374 LIB libspdk_rdma_utils.a 00:02:31.634 SO libspdk_json.so.6.0 00:02:31.634 SO libspdk_rdma_utils.so.1.0 00:02:31.634 SYMLINK libspdk_conf.so 00:02:31.634 SYMLINK libspdk_json.so 00:02:31.634 SYMLINK libspdk_rdma_utils.so 00:02:31.634 LIB libspdk_idxd.a 00:02:31.895 LIB libspdk_vmd.a 00:02:31.895 SO libspdk_idxd.so.12.1 00:02:31.895 SO libspdk_vmd.so.6.0 00:02:31.895 SYMLINK libspdk_idxd.so 00:02:31.895 SYMLINK libspdk_vmd.so 00:02:31.895 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.895 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.895 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.895 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:31.896 CC lib/rdma_provider/common.o 00:02:31.896 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:32.157 LIB 
libspdk_rdma_provider.a 00:02:32.157 LIB libspdk_jsonrpc.a 00:02:32.157 SO libspdk_rdma_provider.so.7.0 00:02:32.157 SO libspdk_jsonrpc.so.6.0 00:02:32.418 SYMLINK libspdk_rdma_provider.so 00:02:32.418 SYMLINK libspdk_jsonrpc.so 00:02:32.418 LIB libspdk_env_dpdk.a 00:02:32.418 SO libspdk_env_dpdk.so.15.1 00:02:32.679 SYMLINK libspdk_env_dpdk.so 00:02:32.679 CC lib/rpc/rpc.o 00:02:32.939 LIB libspdk_rpc.a 00:02:32.939 SO libspdk_rpc.so.6.0 00:02:32.939 SYMLINK libspdk_rpc.so 00:02:33.510 CC lib/notify/notify.o 00:02:33.510 CC lib/trace/trace.o 00:02:33.510 CC lib/trace/trace_flags.o 00:02:33.510 CC lib/notify/notify_rpc.o 00:02:33.510 CC lib/trace/trace_rpc.o 00:02:33.510 CC lib/keyring/keyring.o 00:02:33.510 CC lib/keyring/keyring_rpc.o 00:02:33.510 LIB libspdk_notify.a 00:02:33.510 SO libspdk_notify.so.6.0 00:02:33.510 LIB libspdk_keyring.a 00:02:33.772 LIB libspdk_trace.a 00:02:33.772 SO libspdk_keyring.so.2.0 00:02:33.772 SO libspdk_trace.so.11.0 00:02:33.772 SYMLINK libspdk_notify.so 00:02:33.772 SYMLINK libspdk_keyring.so 00:02:33.772 SYMLINK libspdk_trace.so 00:02:34.035 CC lib/sock/sock.o 00:02:34.035 CC lib/sock/sock_rpc.o 00:02:34.035 CC lib/thread/thread.o 00:02:34.035 CC lib/thread/iobuf.o 00:02:34.608 LIB libspdk_sock.a 00:02:34.608 SO libspdk_sock.so.10.0 00:02:34.608 SYMLINK libspdk_sock.so 00:02:34.870 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:34.870 CC lib/nvme/nvme_ctrlr.o 00:02:34.870 CC lib/nvme/nvme_fabric.o 00:02:34.870 CC lib/nvme/nvme_ns_cmd.o 00:02:34.870 CC lib/nvme/nvme_ns.o 00:02:34.870 CC lib/nvme/nvme_pcie_common.o 00:02:34.870 CC lib/nvme/nvme_pcie.o 00:02:34.870 CC lib/nvme/nvme_qpair.o 00:02:34.870 CC lib/nvme/nvme.o 00:02:34.870 CC lib/nvme/nvme_quirks.o 00:02:34.870 CC lib/nvme/nvme_transport.o 00:02:34.870 CC lib/nvme/nvme_discovery.o 00:02:34.870 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:34.870 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:34.870 CC lib/nvme/nvme_tcp.o 00:02:35.132 CC lib/nvme/nvme_opal.o 00:02:35.132 CC lib/nvme/nvme_io_msg.o 00:02:35.132 CC lib/nvme/nvme_poll_group.o 00:02:35.132 CC lib/nvme/nvme_zns.o 00:02:35.132 CC lib/nvme/nvme_stubs.o 00:02:35.132 CC lib/nvme/nvme_auth.o 00:02:35.132 CC lib/nvme/nvme_cuse.o 00:02:35.132 CC lib/nvme/nvme_vfio_user.o 00:02:35.132 CC lib/nvme/nvme_rdma.o 00:02:35.393 LIB libspdk_thread.a 00:02:35.393 SO libspdk_thread.so.11.0 00:02:35.653 SYMLINK libspdk_thread.so 00:02:35.913 CC lib/accel/accel.o 00:02:35.913 CC lib/accel/accel_rpc.o 00:02:35.913 CC lib/virtio/virtio.o 00:02:35.913 CC lib/accel/accel_sw.o 00:02:35.913 CC lib/virtio/virtio_vhost_user.o 00:02:35.913 CC lib/virtio/virtio_vfio_user.o 00:02:35.913 CC lib/virtio/virtio_pci.o 00:02:35.913 CC lib/fsdev/fsdev.o 00:02:35.913 CC lib/fsdev/fsdev_io.o 00:02:35.913 CC lib/vfu_tgt/tgt_endpoint.o 00:02:35.913 CC lib/fsdev/fsdev_rpc.o 00:02:35.913 CC lib/vfu_tgt/tgt_rpc.o 00:02:35.913 CC lib/blob/blobstore.o 00:02:35.914 CC lib/blob/request.o 00:02:35.914 CC lib/blob/zeroes.o 00:02:35.914 CC lib/blob/blob_bs_dev.o 00:02:35.914 CC lib/init/json_config.o 00:02:35.914 CC lib/init/subsystem.o 00:02:35.914 CC lib/init/subsystem_rpc.o 00:02:35.914 CC lib/init/rpc.o 00:02:36.175 LIB libspdk_init.a 00:02:36.175 SO libspdk_init.so.6.0 00:02:36.175 LIB libspdk_virtio.a 00:02:36.436 LIB libspdk_vfu_tgt.a 00:02:36.436 SO libspdk_virtio.so.7.0 00:02:36.436 SO libspdk_vfu_tgt.so.3.0 00:02:36.436 SYMLINK libspdk_init.so 00:02:36.436 SYMLINK libspdk_virtio.so 00:02:36.436 SYMLINK libspdk_vfu_tgt.so 00:02:36.698 LIB libspdk_fsdev.a 00:02:36.698 SO libspdk_fsdev.so.2.0 00:02:36.698 
SYMLINK libspdk_fsdev.so 00:02:36.698 CC lib/event/app.o 00:02:36.698 CC lib/event/reactor.o 00:02:36.698 CC lib/event/log_rpc.o 00:02:36.698 CC lib/event/app_rpc.o 00:02:36.698 CC lib/event/scheduler_static.o 00:02:36.959 LIB libspdk_accel.a 00:02:36.959 SO libspdk_accel.so.16.0 00:02:36.959 LIB libspdk_nvme.a 00:02:36.959 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:36.959 SYMLINK libspdk_accel.so 00:02:37.220 LIB libspdk_event.a 00:02:37.220 SO libspdk_nvme.so.15.0 00:02:37.220 SO libspdk_event.so.14.0 00:02:37.220 SYMLINK libspdk_event.so 00:02:37.481 SYMLINK libspdk_nvme.so 00:02:37.481 CC lib/bdev/bdev.o 00:02:37.481 CC lib/bdev/bdev_rpc.o 00:02:37.481 CC lib/bdev/bdev_zone.o 00:02:37.481 CC lib/bdev/part.o 00:02:37.481 CC lib/bdev/scsi_nvme.o 00:02:37.742 LIB libspdk_fuse_dispatcher.a 00:02:37.742 SO libspdk_fuse_dispatcher.so.1.0 00:02:37.742 SYMLINK libspdk_fuse_dispatcher.so 00:02:38.686 LIB libspdk_blob.a 00:02:38.686 SO libspdk_blob.so.11.0 00:02:38.686 SYMLINK libspdk_blob.so 00:02:39.260 CC lib/lvol/lvol.o 00:02:39.260 CC lib/blobfs/blobfs.o 00:02:39.260 CC lib/blobfs/tree.o 00:02:39.833 LIB libspdk_bdev.a 00:02:39.833 SO libspdk_bdev.so.17.0 00:02:39.833 LIB libspdk_blobfs.a 00:02:39.833 SO libspdk_blobfs.so.10.0 00:02:39.833 LIB libspdk_lvol.a 00:02:39.833 SYMLINK libspdk_bdev.so 00:02:40.094 SO libspdk_lvol.so.10.0 00:02:40.094 SYMLINK libspdk_blobfs.so 00:02:40.094 SYMLINK libspdk_lvol.so 00:02:40.357 CC lib/nvmf/ctrlr.o 00:02:40.357 CC lib/nvmf/ctrlr_discovery.o 00:02:40.357 CC lib/scsi/dev.o 00:02:40.357 CC lib/nvmf/ctrlr_bdev.o 00:02:40.357 CC lib/nvmf/subsystem.o 00:02:40.357 CC lib/scsi/lun.o 00:02:40.357 CC lib/scsi/port.o 00:02:40.357 CC lib/ftl/ftl_core.o 00:02:40.357 CC lib/nvmf/nvmf.o 00:02:40.357 CC lib/scsi/scsi.o 00:02:40.357 CC lib/nvmf/nvmf_rpc.o 00:02:40.357 CC lib/ftl/ftl_init.o 00:02:40.357 CC lib/nbd/nbd.o 00:02:40.357 CC lib/scsi/scsi_bdev.o 00:02:40.357 CC lib/scsi/scsi_pr.o 00:02:40.357 CC lib/nvmf/transport.o 00:02:40.357 CC lib/ftl/ftl_layout.o 00:02:40.357 CC lib/nbd/nbd_rpc.o 00:02:40.357 CC lib/ftl/ftl_debug.o 00:02:40.357 CC lib/nvmf/tcp.o 00:02:40.357 CC lib/scsi/scsi_rpc.o 00:02:40.357 CC lib/ftl/ftl_io.o 00:02:40.357 CC lib/nvmf/stubs.o 00:02:40.357 CC lib/scsi/task.o 00:02:40.357 CC lib/nvmf/mdns_server.o 00:02:40.357 CC lib/ublk/ublk.o 00:02:40.357 CC lib/ftl/ftl_sb.o 00:02:40.357 CC lib/nvmf/vfio_user.o 00:02:40.357 CC lib/ftl/ftl_l2p.o 00:02:40.357 CC lib/ublk/ublk_rpc.o 00:02:40.357 CC lib/ftl/ftl_l2p_flat.o 00:02:40.357 CC lib/nvmf/rdma.o 00:02:40.357 CC lib/nvmf/auth.o 00:02:40.357 CC lib/ftl/ftl_nv_cache.o 00:02:40.357 CC lib/ftl/ftl_band.o 00:02:40.357 CC lib/ftl/ftl_band_ops.o 00:02:40.357 CC lib/ftl/ftl_writer.o 00:02:40.357 CC lib/ftl/ftl_rq.o 00:02:40.357 CC lib/ftl/ftl_reloc.o 00:02:40.357 CC lib/ftl/ftl_l2p_cache.o 00:02:40.357 CC lib/ftl/ftl_p2l.o 00:02:40.357 CC lib/ftl/ftl_p2l_log.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:40.357 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:40.357 CC lib/ftl/utils/ftl_conf.o 00:02:40.357 CC 
lib/ftl/utils/ftl_md.o 00:02:40.357 CC lib/ftl/utils/ftl_mempool.o 00:02:40.357 CC lib/ftl/utils/ftl_bitmap.o 00:02:40.357 CC lib/ftl/utils/ftl_property.o 00:02:40.357 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:40.357 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:40.357 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:40.357 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:40.357 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:40.357 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:40.357 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:40.357 CC lib/ftl/base/ftl_base_dev.o 00:02:40.357 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:40.357 CC lib/ftl/base/ftl_base_bdev.o 00:02:40.357 CC lib/ftl/ftl_trace.o 00:02:41.300 LIB libspdk_nbd.a 00:02:41.300 SO libspdk_nbd.so.7.0 00:02:41.300 LIB libspdk_scsi.a 00:02:41.300 SYMLINK libspdk_nbd.so 00:02:41.300 SO libspdk_scsi.so.9.0 00:02:41.300 SYMLINK libspdk_scsi.so 00:02:41.300 LIB libspdk_ublk.a 00:02:41.300 SO libspdk_ublk.so.3.0 00:02:41.562 SYMLINK libspdk_ublk.so 00:02:41.562 LIB libspdk_ftl.a 00:02:41.562 CC lib/vhost/vhost.o 00:02:41.562 CC lib/vhost/vhost_rpc.o 00:02:41.562 CC lib/vhost/vhost_scsi.o 00:02:41.562 CC lib/vhost/vhost_blk.o 00:02:41.562 CC lib/vhost/rte_vhost_user.o 00:02:41.562 CC lib/iscsi/conn.o 00:02:41.562 CC lib/iscsi/init_grp.o 00:02:41.562 CC lib/iscsi/iscsi.o 00:02:41.562 CC lib/iscsi/param.o 00:02:41.562 CC lib/iscsi/portal_grp.o 00:02:41.562 CC lib/iscsi/tgt_node.o 00:02:41.562 CC lib/iscsi/iscsi_subsystem.o 00:02:41.562 CC lib/iscsi/iscsi_rpc.o 00:02:41.562 CC lib/iscsi/task.o 00:02:41.824 SO libspdk_ftl.so.9.0 00:02:42.086 SYMLINK libspdk_ftl.so 00:02:42.804 LIB libspdk_nvmf.a 00:02:42.804 SO libspdk_nvmf.so.20.0 00:02:42.804 LIB libspdk_vhost.a 00:02:42.805 SO libspdk_vhost.so.8.0 00:02:43.065 SYMLINK libspdk_vhost.so 00:02:43.066 SYMLINK libspdk_nvmf.so 00:02:43.066 LIB libspdk_iscsi.a 00:02:43.066 SO libspdk_iscsi.so.8.0 00:02:43.066 SYMLINK libspdk_iscsi.so 00:02:43.638 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.638 CC module/vfu_device/vfu_virtio.o 00:02:43.638 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.638 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.638 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.638 CC module/vfu_device/vfu_virtio_fs.o 00:02:43.899 CC module/blob/bdev/blob_bdev.o 00:02:43.899 LIB libspdk_env_dpdk_rpc.a 00:02:43.899 CC module/keyring/file/keyring.o 00:02:43.899 CC module/keyring/file/keyring_rpc.o 00:02:43.899 CC module/accel/dsa/accel_dsa.o 00:02:43.899 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.899 CC module/sock/posix/posix.o 00:02:43.899 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.899 CC module/keyring/linux/keyring.o 00:02:43.899 CC module/keyring/linux/keyring_rpc.o 00:02:43.899 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.899 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.899 CC module/accel/error/accel_error.o 00:02:43.899 CC module/fsdev/aio/fsdev_aio.o 00:02:43.899 CC module/accel/error/accel_error_rpc.o 00:02:43.899 CC module/accel/ioat/accel_ioat.o 00:02:43.899 CC module/accel/iaa/accel_iaa.o 00:02:43.899 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:43.899 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.899 CC module/fsdev/aio/linux_aio_mgr.o 00:02:43.899 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.899 SO libspdk_env_dpdk_rpc.so.6.0 
00:02:43.899 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.160 LIB libspdk_keyring_file.a 00:02:44.160 LIB libspdk_keyring_linux.a 00:02:44.160 LIB libspdk_scheduler_gscheduler.a 00:02:44.160 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.160 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.160 SO libspdk_keyring_file.so.2.0 00:02:44.160 LIB libspdk_accel_ioat.a 00:02:44.160 SO libspdk_keyring_linux.so.1.0 00:02:44.160 LIB libspdk_accel_iaa.a 00:02:44.160 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.160 LIB libspdk_scheduler_dynamic.a 00:02:44.160 LIB libspdk_accel_error.a 00:02:44.160 SO libspdk_accel_ioat.so.6.0 00:02:44.160 LIB libspdk_accel_dsa.a 00:02:44.160 LIB libspdk_blob_bdev.a 00:02:44.160 SYMLINK libspdk_keyring_file.so 00:02:44.160 SO libspdk_accel_iaa.so.3.0 00:02:44.160 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.160 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.160 SO libspdk_accel_error.so.2.0 00:02:44.160 SYMLINK libspdk_keyring_linux.so 00:02:44.160 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.160 SO libspdk_blob_bdev.so.11.0 00:02:44.160 SO libspdk_accel_dsa.so.5.0 00:02:44.160 SYMLINK libspdk_accel_ioat.so 00:02:44.160 SYMLINK libspdk_accel_error.so 00:02:44.160 SYMLINK libspdk_accel_iaa.so 00:02:44.160 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.421 SYMLINK libspdk_blob_bdev.so 00:02:44.421 SYMLINK libspdk_accel_dsa.so 00:02:44.421 LIB libspdk_vfu_device.a 00:02:44.421 SO libspdk_vfu_device.so.3.0 00:02:44.421 SYMLINK libspdk_vfu_device.so 00:02:44.421 LIB libspdk_fsdev_aio.a 00:02:44.683 SO libspdk_fsdev_aio.so.1.0 00:02:44.683 LIB libspdk_sock_posix.a 00:02:44.683 SO libspdk_sock_posix.so.6.0 00:02:44.683 SYMLINK libspdk_fsdev_aio.so 00:02:44.683 SYMLINK libspdk_sock_posix.so 00:02:44.944 CC module/bdev/aio/bdev_aio.o 00:02:44.944 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.944 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.944 CC module/bdev/nvme/bdev_nvme.o 00:02:44.944 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.944 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.944 CC module/bdev/nvme/nvme_rpc.o 00:02:44.944 CC module/bdev/error/vbdev_error.o 00:02:44.944 CC module/bdev/gpt/gpt.o 00:02:44.944 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.944 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.944 CC module/bdev/raid/bdev_raid.o 00:02:44.944 CC module/bdev/nvme/vbdev_opal.o 00:02:44.944 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.944 CC module/bdev/split/vbdev_split.o 00:02:44.944 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.944 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.944 CC module/bdev/malloc/bdev_malloc.o 00:02:44.944 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.944 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.944 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.944 CC module/bdev/null/bdev_null.o 00:02:44.944 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.944 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.944 CC module/bdev/raid/raid0.o 00:02:44.944 CC module/bdev/null/bdev_null_rpc.o 00:02:44.944 CC module/bdev/delay/vbdev_delay.o 00:02:44.944 CC module/bdev/raid/raid1.o 00:02:44.944 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.944 CC module/bdev/raid/concat.o 00:02:44.944 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.944 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.944 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.944 CC module/bdev/ftl/bdev_ftl.o 00:02:44.944 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.944 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.944 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.944 CC 
module/bdev/iscsi/bdev_iscsi.o 00:02:44.944 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.944 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.944 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.944 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:45.204 LIB libspdk_blobfs_bdev.a 00:02:45.204 SO libspdk_blobfs_bdev.so.6.0 00:02:45.204 LIB libspdk_bdev_split.a 00:02:45.204 LIB libspdk_bdev_null.a 00:02:45.204 SYMLINK libspdk_blobfs_bdev.so 00:02:45.204 SO libspdk_bdev_split.so.6.0 00:02:45.204 LIB libspdk_bdev_aio.a 00:02:45.204 SO libspdk_bdev_null.so.6.0 00:02:45.204 LIB libspdk_bdev_error.a 00:02:45.204 LIB libspdk_bdev_passthru.a 00:02:45.204 LIB libspdk_bdev_gpt.a 00:02:45.204 LIB libspdk_bdev_ftl.a 00:02:45.204 SO libspdk_bdev_error.so.6.0 00:02:45.465 SO libspdk_bdev_aio.so.6.0 00:02:45.465 SO libspdk_bdev_passthru.so.6.0 00:02:45.465 SO libspdk_bdev_gpt.so.6.0 00:02:45.465 SYMLINK libspdk_bdev_split.so 00:02:45.465 LIB libspdk_bdev_zone_block.a 00:02:45.465 SO libspdk_bdev_ftl.so.6.0 00:02:45.465 SYMLINK libspdk_bdev_null.so 00:02:45.465 LIB libspdk_bdev_malloc.a 00:02:45.465 LIB libspdk_bdev_delay.a 00:02:45.465 LIB libspdk_bdev_iscsi.a 00:02:45.465 SYMLINK libspdk_bdev_error.so 00:02:45.466 SO libspdk_bdev_zone_block.so.6.0 00:02:45.466 SYMLINK libspdk_bdev_aio.so 00:02:45.466 SYMLINK libspdk_bdev_gpt.so 00:02:45.466 SYMLINK libspdk_bdev_passthru.so 00:02:45.466 SO libspdk_bdev_delay.so.6.0 00:02:45.466 SO libspdk_bdev_malloc.so.6.0 00:02:45.466 SYMLINK libspdk_bdev_ftl.so 00:02:45.466 SO libspdk_bdev_iscsi.so.6.0 00:02:45.466 SYMLINK libspdk_bdev_zone_block.so 00:02:45.466 SYMLINK libspdk_bdev_delay.so 00:02:45.466 SYMLINK libspdk_bdev_malloc.so 00:02:45.466 SYMLINK libspdk_bdev_iscsi.so 00:02:45.466 LIB libspdk_bdev_lvol.a 00:02:45.466 LIB libspdk_bdev_virtio.a 00:02:45.466 SO libspdk_bdev_lvol.so.6.0 00:02:45.466 SO libspdk_bdev_virtio.so.6.0 00:02:45.727 SYMLINK libspdk_bdev_lvol.so 00:02:45.727 SYMLINK libspdk_bdev_virtio.so 00:02:45.989 LIB libspdk_bdev_raid.a 00:02:45.989 SO libspdk_bdev_raid.so.6.0 00:02:45.989 SYMLINK libspdk_bdev_raid.so 00:02:47.375 LIB libspdk_bdev_nvme.a 00:02:47.375 SO libspdk_bdev_nvme.so.7.1 00:02:47.375 SYMLINK libspdk_bdev_nvme.so 00:02:47.948 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.948 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.948 CC module/event/subsystems/sock/sock.o 00:02:47.948 CC module/event/subsystems/vmd/vmd.o 00:02:47.948 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.948 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.948 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:47.948 CC module/event/subsystems/keyring/keyring.o 00:02:47.948 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.948 CC module/event/subsystems/fsdev/fsdev.o 00:02:48.209 LIB libspdk_event_fsdev.a 00:02:48.209 LIB libspdk_event_sock.a 00:02:48.209 LIB libspdk_event_keyring.a 00:02:48.209 LIB libspdk_event_iobuf.a 00:02:48.209 LIB libspdk_event_vmd.a 00:02:48.209 LIB libspdk_event_scheduler.a 00:02:48.209 LIB libspdk_event_vhost_blk.a 00:02:48.209 LIB libspdk_event_vfu_tgt.a 00:02:48.209 SO libspdk_event_sock.so.5.0 00:02:48.209 SO libspdk_event_fsdev.so.1.0 00:02:48.209 SO libspdk_event_keyring.so.1.0 00:02:48.209 SO libspdk_event_iobuf.so.3.0 00:02:48.209 SO libspdk_event_scheduler.so.4.0 00:02:48.209 SO libspdk_event_vmd.so.6.0 00:02:48.209 SO libspdk_event_vhost_blk.so.3.0 00:02:48.209 SO libspdk_event_vfu_tgt.so.3.0 00:02:48.469 SYMLINK libspdk_event_sock.so 00:02:48.469 SYMLINK libspdk_event_fsdev.so 00:02:48.469 
SYMLINK libspdk_event_keyring.so 00:02:48.469 SYMLINK libspdk_event_vhost_blk.so 00:02:48.469 SYMLINK libspdk_event_vmd.so 00:02:48.469 SYMLINK libspdk_event_vfu_tgt.so 00:02:48.469 SYMLINK libspdk_event_iobuf.so 00:02:48.469 SYMLINK libspdk_event_scheduler.so 00:02:48.730 CC module/event/subsystems/accel/accel.o 00:02:48.991 LIB libspdk_event_accel.a 00:02:48.991 SO libspdk_event_accel.so.6.0 00:02:48.991 SYMLINK libspdk_event_accel.so 00:02:49.252 CC module/event/subsystems/bdev/bdev.o 00:02:49.513 LIB libspdk_event_bdev.a 00:02:49.513 SO libspdk_event_bdev.so.6.0 00:02:49.773 SYMLINK libspdk_event_bdev.so 00:02:50.034 CC module/event/subsystems/nbd/nbd.o 00:02:50.034 CC module/event/subsystems/scsi/scsi.o 00:02:50.034 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:50.034 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:50.034 CC module/event/subsystems/ublk/ublk.o 00:02:50.295 LIB libspdk_event_ublk.a 00:02:50.295 LIB libspdk_event_scsi.a 00:02:50.295 LIB libspdk_event_nbd.a 00:02:50.295 SO libspdk_event_ublk.so.3.0 00:02:50.295 SO libspdk_event_scsi.so.6.0 00:02:50.295 SO libspdk_event_nbd.so.6.0 00:02:50.295 LIB libspdk_event_nvmf.a 00:02:50.295 SYMLINK libspdk_event_ublk.so 00:02:50.295 SYMLINK libspdk_event_nbd.so 00:02:50.295 SYMLINK libspdk_event_scsi.so 00:02:50.295 SO libspdk_event_nvmf.so.6.0 00:02:50.295 SYMLINK libspdk_event_nvmf.so 00:02:50.557 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.557 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.819 LIB libspdk_event_vhost_scsi.a 00:02:50.819 LIB libspdk_event_iscsi.a 00:02:50.819 SO libspdk_event_vhost_scsi.so.3.0 00:02:50.819 SO libspdk_event_iscsi.so.6.0 00:02:50.819 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.081 SYMLINK libspdk_event_iscsi.so 00:02:51.081 SO libspdk.so.6.0 00:02:51.081 SYMLINK libspdk.so 00:02:51.655 CXX app/trace/trace.o 00:02:51.655 CC app/trace_record/trace_record.o 00:02:51.655 CC app/spdk_nvme_perf/perf.o 00:02:51.655 CC app/spdk_nvme_discover/discovery_aer.o 00:02:51.655 CC app/spdk_top/spdk_top.o 00:02:51.655 TEST_HEADER include/spdk/accel.h 00:02:51.655 TEST_HEADER include/spdk/accel_module.h 00:02:51.655 TEST_HEADER include/spdk/assert.h 00:02:51.655 TEST_HEADER include/spdk/barrier.h 00:02:51.655 TEST_HEADER include/spdk/base64.h 00:02:51.655 CC app/spdk_nvme_identify/identify.o 00:02:51.655 TEST_HEADER include/spdk/bdev_module.h 00:02:51.655 TEST_HEADER include/spdk/bdev.h 00:02:51.655 TEST_HEADER include/spdk/bdev_zone.h 00:02:51.655 TEST_HEADER include/spdk/bit_array.h 00:02:51.655 CC test/rpc_client/rpc_client_test.o 00:02:51.655 TEST_HEADER include/spdk/bit_pool.h 00:02:51.655 TEST_HEADER include/spdk/blob_bdev.h 00:02:51.655 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:51.655 TEST_HEADER include/spdk/blobfs.h 00:02:51.655 CC app/spdk_lspci/spdk_lspci.o 00:02:51.655 TEST_HEADER include/spdk/blob.h 00:02:51.655 TEST_HEADER include/spdk/conf.h 00:02:51.655 TEST_HEADER include/spdk/config.h 00:02:51.655 TEST_HEADER include/spdk/cpuset.h 00:02:51.655 TEST_HEADER include/spdk/crc16.h 00:02:51.655 TEST_HEADER include/spdk/crc32.h 00:02:51.655 TEST_HEADER include/spdk/crc64.h 00:02:51.655 TEST_HEADER include/spdk/dif.h 00:02:51.655 TEST_HEADER include/spdk/dma.h 00:02:51.655 TEST_HEADER include/spdk/endian.h 00:02:51.655 TEST_HEADER include/spdk/env_dpdk.h 00:02:51.655 TEST_HEADER include/spdk/env.h 00:02:51.656 TEST_HEADER include/spdk/event.h 00:02:51.656 TEST_HEADER include/spdk/fd_group.h 00:02:51.656 TEST_HEADER include/spdk/fd.h 00:02:51.656 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:51.656 TEST_HEADER include/spdk/fsdev.h 00:02:51.656 TEST_HEADER include/spdk/file.h 00:02:51.656 TEST_HEADER include/spdk/fsdev_module.h 00:02:51.656 TEST_HEADER include/spdk/ftl.h 00:02:51.656 TEST_HEADER include/spdk/gpt_spec.h 00:02:51.656 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:51.656 TEST_HEADER include/spdk/hexlify.h 00:02:51.656 TEST_HEADER include/spdk/histogram_data.h 00:02:51.656 CC app/spdk_dd/spdk_dd.o 00:02:51.656 CC app/nvmf_tgt/nvmf_main.o 00:02:51.656 TEST_HEADER include/spdk/idxd_spec.h 00:02:51.656 TEST_HEADER include/spdk/idxd.h 00:02:51.656 TEST_HEADER include/spdk/init.h 00:02:51.656 TEST_HEADER include/spdk/ioat.h 00:02:51.656 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.656 TEST_HEADER include/spdk/ioat_spec.h 00:02:51.656 TEST_HEADER include/spdk/iscsi_spec.h 00:02:51.656 TEST_HEADER include/spdk/json.h 00:02:51.656 TEST_HEADER include/spdk/jsonrpc.h 00:02:51.656 TEST_HEADER include/spdk/keyring.h 00:02:51.656 TEST_HEADER include/spdk/likely.h 00:02:51.656 TEST_HEADER include/spdk/keyring_module.h 00:02:51.656 TEST_HEADER include/spdk/log.h 00:02:51.656 TEST_HEADER include/spdk/lvol.h 00:02:51.656 TEST_HEADER include/spdk/memory.h 00:02:51.656 TEST_HEADER include/spdk/mmio.h 00:02:51.656 TEST_HEADER include/spdk/md5.h 00:02:51.656 TEST_HEADER include/spdk/nbd.h 00:02:51.656 TEST_HEADER include/spdk/notify.h 00:02:51.656 TEST_HEADER include/spdk/net.h 00:02:51.656 TEST_HEADER include/spdk/nvme.h 00:02:51.656 CC app/spdk_tgt/spdk_tgt.o 00:02:51.656 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:51.656 TEST_HEADER include/spdk/nvme_intel.h 00:02:51.656 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:51.656 TEST_HEADER include/spdk/nvme_spec.h 00:02:51.656 TEST_HEADER include/spdk/nvme_zns.h 00:02:51.656 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:51.656 TEST_HEADER include/spdk/nvmf_spec.h 00:02:51.656 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:51.656 TEST_HEADER include/spdk/nvmf.h 00:02:51.656 TEST_HEADER include/spdk/nvmf_transport.h 00:02:51.656 TEST_HEADER include/spdk/opal.h 00:02:51.656 TEST_HEADER include/spdk/opal_spec.h 00:02:51.656 TEST_HEADER include/spdk/pci_ids.h 00:02:51.656 TEST_HEADER include/spdk/queue.h 00:02:51.656 TEST_HEADER include/spdk/pipe.h 00:02:51.656 TEST_HEADER include/spdk/reduce.h 00:02:51.656 TEST_HEADER include/spdk/rpc.h 00:02:51.656 TEST_HEADER include/spdk/scheduler.h 00:02:51.656 TEST_HEADER include/spdk/scsi.h 00:02:51.656 TEST_HEADER include/spdk/scsi_spec.h 00:02:51.656 TEST_HEADER include/spdk/sock.h 00:02:51.656 TEST_HEADER include/spdk/stdinc.h 00:02:51.656 TEST_HEADER include/spdk/string.h 00:02:51.656 TEST_HEADER include/spdk/thread.h 00:02:51.656 TEST_HEADER include/spdk/trace.h 00:02:51.656 TEST_HEADER include/spdk/trace_parser.h 00:02:51.656 TEST_HEADER include/spdk/tree.h 00:02:51.656 TEST_HEADER include/spdk/ublk.h 00:02:51.656 TEST_HEADER include/spdk/util.h 00:02:51.656 TEST_HEADER include/spdk/uuid.h 00:02:51.656 TEST_HEADER include/spdk/version.h 00:02:51.656 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:51.656 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:51.656 TEST_HEADER include/spdk/vhost.h 00:02:51.656 TEST_HEADER include/spdk/vmd.h 00:02:51.656 TEST_HEADER include/spdk/xor.h 00:02:51.656 TEST_HEADER include/spdk/zipf.h 00:02:51.656 CXX test/cpp_headers/accel.o 00:02:51.656 CXX test/cpp_headers/accel_module.o 00:02:51.656 CXX test/cpp_headers/assert.o 00:02:51.656 CXX test/cpp_headers/barrier.o 00:02:51.656 CXX test/cpp_headers/base64.o 00:02:51.656 CXX 
test/cpp_headers/bdev.o 00:02:51.656 CXX test/cpp_headers/bdev_module.o 00:02:51.656 CXX test/cpp_headers/bit_pool.o 00:02:51.656 CXX test/cpp_headers/bdev_zone.o 00:02:51.656 CXX test/cpp_headers/bit_array.o 00:02:51.656 CXX test/cpp_headers/blob_bdev.o 00:02:51.656 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.656 CXX test/cpp_headers/blobfs.o 00:02:51.656 CXX test/cpp_headers/conf.o 00:02:51.656 CXX test/cpp_headers/blob.o 00:02:51.656 CXX test/cpp_headers/config.o 00:02:51.656 CXX test/cpp_headers/crc16.o 00:02:51.656 CXX test/cpp_headers/cpuset.o 00:02:51.656 CXX test/cpp_headers/crc64.o 00:02:51.656 CXX test/cpp_headers/crc32.o 00:02:51.656 CXX test/cpp_headers/endian.o 00:02:51.656 CXX test/cpp_headers/dif.o 00:02:51.656 CXX test/cpp_headers/dma.o 00:02:51.656 CXX test/cpp_headers/env_dpdk.o 00:02:51.656 CXX test/cpp_headers/event.o 00:02:51.656 CXX test/cpp_headers/env.o 00:02:51.656 CXX test/cpp_headers/fd_group.o 00:02:51.656 CXX test/cpp_headers/fd.o 00:02:51.656 CXX test/cpp_headers/file.o 00:02:51.656 CXX test/cpp_headers/fsdev.o 00:02:51.656 CXX test/cpp_headers/fsdev_module.o 00:02:51.656 CXX test/cpp_headers/fuse_dispatcher.o 00:02:51.656 CXX test/cpp_headers/ftl.o 00:02:51.656 CXX test/cpp_headers/gpt_spec.o 00:02:51.656 CXX test/cpp_headers/hexlify.o 00:02:51.656 CXX test/cpp_headers/histogram_data.o 00:02:51.656 CXX test/cpp_headers/idxd.o 00:02:51.656 CXX test/cpp_headers/idxd_spec.o 00:02:51.925 CXX test/cpp_headers/ioat.o 00:02:51.925 CXX test/cpp_headers/init.o 00:02:51.925 CXX test/cpp_headers/ioat_spec.o 00:02:51.925 CXX test/cpp_headers/json.o 00:02:51.925 CXX test/cpp_headers/jsonrpc.o 00:02:51.925 CXX test/cpp_headers/iscsi_spec.o 00:02:51.925 CXX test/cpp_headers/keyring.o 00:02:51.925 CXX test/cpp_headers/keyring_module.o 00:02:51.925 CXX test/cpp_headers/likely.o 00:02:51.925 CXX test/cpp_headers/log.o 00:02:51.925 CXX test/cpp_headers/lvol.o 00:02:51.925 CXX test/cpp_headers/mmio.o 00:02:51.925 CXX test/cpp_headers/md5.o 00:02:51.925 CXX test/cpp_headers/memory.o 00:02:51.925 CC examples/ioat/perf/perf.o 00:02:51.925 CXX test/cpp_headers/nbd.o 00:02:51.925 CXX test/cpp_headers/notify.o 00:02:51.925 CC examples/util/zipf/zipf.o 00:02:51.925 CXX test/cpp_headers/net.o 00:02:51.925 CXX test/cpp_headers/nvme_intel.o 00:02:51.925 CXX test/cpp_headers/nvme.o 00:02:51.925 CXX test/cpp_headers/nvme_spec.o 00:02:51.925 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.925 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.925 CXX test/cpp_headers/nvmf_cmd.o 00:02:51.925 CXX test/cpp_headers/nvme_zns.o 00:02:51.925 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:51.925 CXX test/cpp_headers/nvmf_spec.o 00:02:51.925 CXX test/cpp_headers/nvmf.o 00:02:51.925 CXX test/cpp_headers/nvmf_transport.o 00:02:51.925 CXX test/cpp_headers/pipe.o 00:02:51.925 CXX test/cpp_headers/opal.o 00:02:51.926 CC examples/ioat/verify/verify.o 00:02:51.926 CXX test/cpp_headers/opal_spec.o 00:02:51.926 CXX test/cpp_headers/pci_ids.o 00:02:51.926 CC test/app/histogram_perf/histogram_perf.o 00:02:51.926 CXX test/cpp_headers/queue.o 00:02:51.926 CC test/app/jsoncat/jsoncat.o 00:02:51.926 LINK spdk_lspci 00:02:51.926 CXX test/cpp_headers/reduce.o 00:02:51.926 CC test/dma/test_dma/test_dma.o 00:02:51.926 CXX test/cpp_headers/scsi.o 00:02:51.926 CXX test/cpp_headers/rpc.o 00:02:51.926 CXX test/cpp_headers/scheduler.o 00:02:51.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.926 CC test/thread/poller_perf/poller_perf.o 00:02:51.926 CXX test/cpp_headers/scsi_spec.o 00:02:51.926 CC 
test/env/vtophys/vtophys.o 00:02:51.926 CXX test/cpp_headers/sock.o 00:02:51.926 CXX test/cpp_headers/stdinc.o 00:02:51.926 CXX test/cpp_headers/thread.o 00:02:51.926 CXX test/cpp_headers/string.o 00:02:51.926 CC test/app/stub/stub.o 00:02:51.926 CXX test/cpp_headers/trace.o 00:02:51.926 CXX test/cpp_headers/trace_parser.o 00:02:51.926 CXX test/cpp_headers/tree.o 00:02:51.926 CXX test/cpp_headers/util.o 00:02:51.926 CXX test/cpp_headers/ublk.o 00:02:51.926 CC app/fio/nvme/fio_plugin.o 00:02:51.926 CXX test/cpp_headers/uuid.o 00:02:51.926 CXX test/cpp_headers/vfio_user_pci.o 00:02:51.926 CXX test/cpp_headers/version.o 00:02:51.926 CXX test/cpp_headers/vfio_user_spec.o 00:02:51.926 CXX test/cpp_headers/vhost.o 00:02:51.926 CXX test/cpp_headers/zipf.o 00:02:51.926 CXX test/cpp_headers/vmd.o 00:02:51.926 CC test/env/pci/pci_ut.o 00:02:51.926 CXX test/cpp_headers/xor.o 00:02:51.926 CC test/env/memory/memory_ut.o 00:02:51.926 CC test/app/bdev_svc/bdev_svc.o 00:02:51.926 CC app/fio/bdev/fio_plugin.o 00:02:52.197 LINK spdk_nvme_discover 00:02:52.197 LINK rpc_client_test 00:02:52.197 LINK nvmf_tgt 00:02:52.197 LINK interrupt_tgt 00:02:52.468 LINK iscsi_tgt 00:02:52.468 LINK spdk_tgt 00:02:52.732 LINK spdk_trace_record 00:02:52.732 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:52.732 CC test/env/mem_callbacks/mem_callbacks.o 00:02:52.732 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:52.732 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:52.732 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:52.732 LINK spdk_dd 00:02:52.732 LINK verify 00:02:52.991 LINK zipf 00:02:52.991 LINK histogram_perf 00:02:52.991 LINK ioat_perf 00:02:52.991 LINK jsoncat 00:02:52.991 LINK vtophys 00:02:52.991 LINK env_dpdk_post_init 00:02:52.991 LINK poller_perf 00:02:52.991 LINK bdev_svc 00:02:53.251 LINK stub 00:02:53.251 LINK spdk_trace 00:02:53.251 LINK spdk_bdev 00:02:53.511 LINK spdk_nvme_identify 00:02:53.511 LINK pci_ut 00:02:53.511 LINK vhost_fuzz 00:02:53.511 LINK nvme_fuzz 00:02:53.511 CC examples/vmd/led/led.o 00:02:53.511 LINK spdk_nvme 00:02:53.511 CC examples/idxd/perf/perf.o 00:02:53.511 LINK test_dma 00:02:53.511 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.511 CC examples/sock/hello_world/hello_sock.o 00:02:53.511 LINK spdk_top 00:02:53.511 LINK spdk_nvme_perf 00:02:53.511 CC examples/thread/thread/thread_ex.o 00:02:53.511 CC app/vhost/vhost.o 00:02:53.511 LINK mem_callbacks 00:02:53.772 CC test/event/reactor_perf/reactor_perf.o 00:02:53.772 CC test/event/event_perf/event_perf.o 00:02:53.772 CC test/event/reactor/reactor.o 00:02:53.772 CC test/event/app_repeat/app_repeat.o 00:02:53.772 CC test/event/scheduler/scheduler.o 00:02:53.772 LINK led 00:02:53.772 LINK lsvmd 00:02:53.772 LINK reactor_perf 00:02:53.772 LINK event_perf 00:02:53.772 LINK hello_sock 00:02:53.772 LINK reactor 00:02:53.772 LINK vhost 00:02:53.772 LINK idxd_perf 00:02:53.772 LINK app_repeat 00:02:53.772 LINK thread 00:02:54.033 LINK scheduler 00:02:54.033 LINK memory_ut 00:02:54.033 CC test/nvme/aer/aer.o 00:02:54.033 CC test/nvme/reset/reset.o 00:02:54.033 CC test/nvme/startup/startup.o 00:02:54.033 CC test/nvme/overhead/overhead.o 00:02:54.033 CC test/nvme/e2edp/nvme_dp.o 00:02:54.033 CC test/nvme/sgl/sgl.o 00:02:54.033 CC test/nvme/cuse/cuse.o 00:02:54.033 CC test/nvme/compliance/nvme_compliance.o 00:02:54.033 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.293 CC test/nvme/fused_ordering/fused_ordering.o 00:02:54.293 CC test/nvme/connect_stress/connect_stress.o 00:02:54.293 CC test/nvme/reserve/reserve.o 00:02:54.293 CC 
test/nvme/simple_copy/simple_copy.o 00:02:54.293 CC test/nvme/fdp/fdp.o 00:02:54.293 CC test/nvme/err_injection/err_injection.o 00:02:54.293 CC test/nvme/boot_partition/boot_partition.o 00:02:54.293 CC test/blobfs/mkfs/mkfs.o 00:02:54.293 CC test/accel/dif/dif.o 00:02:54.293 CC test/lvol/esnap/esnap.o 00:02:54.293 LINK startup 00:02:54.554 LINK boot_partition 00:02:54.554 CC examples/nvme/abort/abort.o 00:02:54.554 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:54.554 CC examples/nvme/hotplug/hotplug.o 00:02:54.554 LINK connect_stress 00:02:54.554 CC examples/nvme/arbitration/arbitration.o 00:02:54.554 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:54.554 LINK fused_ordering 00:02:54.554 LINK doorbell_aers 00:02:54.554 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:54.554 CC examples/nvme/reconnect/reconnect.o 00:02:54.554 LINK err_injection 00:02:54.554 LINK reset 00:02:54.554 CC examples/nvme/hello_world/hello_world.o 00:02:54.554 LINK iscsi_fuzz 00:02:54.554 LINK reserve 00:02:54.554 LINK mkfs 00:02:54.554 LINK simple_copy 00:02:54.554 LINK sgl 00:02:54.554 LINK aer 00:02:54.554 LINK nvme_dp 00:02:54.554 CC examples/accel/perf/accel_perf.o 00:02:54.554 LINK overhead 00:02:54.554 CC examples/blob/cli/blobcli.o 00:02:54.554 LINK nvme_compliance 00:02:54.554 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:54.554 CC examples/blob/hello_world/hello_blob.o 00:02:54.554 LINK fdp 00:02:54.816 LINK pmr_persistence 00:02:54.816 LINK cmb_copy 00:02:54.816 LINK hotplug 00:02:54.816 LINK hello_world 00:02:54.816 LINK arbitration 00:02:54.816 LINK reconnect 00:02:54.816 LINK abort 00:02:54.816 LINK hello_blob 00:02:54.816 LINK dif 00:02:54.816 LINK hello_fsdev 00:02:55.077 LINK nvme_manage 00:02:55.077 LINK accel_perf 00:02:55.077 LINK blobcli 00:02:55.339 LINK cuse 00:02:55.602 CC test/bdev/bdevio/bdevio.o 00:02:55.602 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.602 CC examples/bdev/bdevperf/bdevperf.o 00:02:55.862 LINK bdevio 00:02:55.862 LINK hello_bdev 00:02:56.436 LINK bdevperf 00:02:57.009 CC examples/nvmf/nvmf/nvmf.o 00:02:57.270 LINK nvmf 00:02:58.658 LINK esnap 00:02:59.230 00:02:59.230 real 0m56.107s 00:02:59.230 user 8m9.212s 00:02:59.230 sys 6m9.081s 00:02:59.230 10:43:18 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:59.230 10:43:18 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.230 ************************************ 00:02:59.230 END TEST make 00:02:59.230 ************************************ 00:02:59.230 10:43:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.230 10:43:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.230 10:43:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.230 10:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.230 10:43:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.230 10:43:18 -- pm/common@44 -- $ pid=63193 00:02:59.230 10:43:18 -- pm/common@50 -- $ kill -TERM 63193 00:02:59.230 10:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.230 10:43:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.230 10:43:18 -- pm/common@44 -- $ pid=63194 00:02:59.230 10:43:18 -- pm/common@50 -- $ kill -TERM 63194 00:02:59.230 10:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.230 10:43:18 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:59.230 10:43:18 -- pm/common@44 -- $ pid=63196 00:02:59.230 10:43:18 -- pm/common@50 -- $ kill -TERM 63196 00:02:59.230 10:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.230 10:43:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:59.230 10:43:18 -- pm/common@44 -- $ pid=63221 00:02:59.230 10:43:18 -- pm/common@50 -- $ sudo -E kill -TERM 63221 00:02:59.230 10:43:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:59.230 10:43:18 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:59.230 10:43:18 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:59.230 10:43:18 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:59.230 10:43:18 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:59.491 10:43:18 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:59.491 10:43:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:59.491 10:43:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:59.491 10:43:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:59.491 10:43:18 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.491 10:43:18 -- scripts/common.sh@336 -- # read -ra ver1 00:02:59.491 10:43:18 -- scripts/common.sh@337 -- # IFS=.-: 00:02:59.491 10:43:18 -- scripts/common.sh@337 -- # read -ra ver2 00:02:59.491 10:43:18 -- scripts/common.sh@338 -- # local 'op=<' 00:02:59.491 10:43:18 -- scripts/common.sh@340 -- # ver1_l=2 00:02:59.491 10:43:18 -- scripts/common.sh@341 -- # ver2_l=1 00:02:59.491 10:43:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:59.491 10:43:18 -- scripts/common.sh@344 -- # case "$op" in 00:02:59.491 10:43:18 -- scripts/common.sh@345 -- # : 1 00:02:59.491 10:43:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:59.491 10:43:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.491 10:43:18 -- scripts/common.sh@365 -- # decimal 1 00:02:59.492 10:43:18 -- scripts/common.sh@353 -- # local d=1 00:02:59.492 10:43:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.492 10:43:18 -- scripts/common.sh@355 -- # echo 1 00:02:59.492 10:43:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:59.492 10:43:18 -- scripts/common.sh@366 -- # decimal 2 00:02:59.492 10:43:18 -- scripts/common.sh@353 -- # local d=2 00:02:59.492 10:43:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.492 10:43:18 -- scripts/common.sh@355 -- # echo 2 00:02:59.492 10:43:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:59.492 10:43:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:59.492 10:43:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:59.492 10:43:18 -- scripts/common.sh@368 -- # return 0 00:02:59.492 10:43:18 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.492 10:43:18 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:59.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.492 --rc genhtml_branch_coverage=1 00:02:59.492 --rc genhtml_function_coverage=1 00:02:59.492 --rc genhtml_legend=1 00:02:59.492 --rc geninfo_all_blocks=1 00:02:59.492 --rc geninfo_unexecuted_blocks=1 00:02:59.492 00:02:59.492 ' 00:02:59.492 10:43:18 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:59.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.492 --rc genhtml_branch_coverage=1 00:02:59.492 --rc genhtml_function_coverage=1 00:02:59.492 --rc genhtml_legend=1 00:02:59.492 --rc geninfo_all_blocks=1 00:02:59.492 --rc geninfo_unexecuted_blocks=1 00:02:59.492 00:02:59.492 ' 00:02:59.492 10:43:18 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:59.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.492 --rc genhtml_branch_coverage=1 00:02:59.492 --rc genhtml_function_coverage=1 00:02:59.492 --rc genhtml_legend=1 00:02:59.492 --rc geninfo_all_blocks=1 00:02:59.492 --rc geninfo_unexecuted_blocks=1 00:02:59.492 00:02:59.492 ' 00:02:59.492 10:43:18 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:59.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.492 --rc genhtml_branch_coverage=1 00:02:59.492 --rc genhtml_function_coverage=1 00:02:59.492 --rc genhtml_legend=1 00:02:59.492 --rc geninfo_all_blocks=1 00:02:59.492 --rc geninfo_unexecuted_blocks=1 00:02:59.492 00:02:59.492 ' 00:02:59.492 10:43:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:59.492 10:43:18 -- nvmf/common.sh@7 -- # uname -s 00:02:59.492 10:43:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.492 10:43:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.492 10:43:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.492 10:43:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.492 10:43:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.492 10:43:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.492 10:43:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.492 10:43:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.492 10:43:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.492 10:43:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.492 10:43:18 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:59.492 10:43:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:59.492 10:43:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.492 10:43:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.492 10:43:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:59.492 10:43:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:59.492 10:43:18 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:59.492 10:43:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:59.492 10:43:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.492 10:43:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.492 10:43:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.492 10:43:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.492 10:43:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.492 10:43:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.492 10:43:18 -- paths/export.sh@5 -- # export PATH 00:02:59.492 10:43:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.492 10:43:18 -- nvmf/common.sh@51 -- # : 0 00:02:59.492 10:43:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:59.492 10:43:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:59.492 10:43:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:59.492 10:43:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.492 10:43:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.492 10:43:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:59.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:59.492 10:43:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:59.492 10:43:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:59.492 10:43:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:59.492 10:43:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.492 10:43:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.492 10:43:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.492 10:43:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.492 10:43:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
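[Annotation] The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the -eq operator of test requires integers on both sides, and the left-hand expansion was empty. A minimal sketch of the failure mode and one possible guard (FOO is a hypothetical stand-in, not a variable from the script):

  FOO=""
  [ "$FOO" -eq 1 ] && echo match       # prints "[: : integer expression expected" to stderr, evaluates false
  [ "${FOO:-0}" -eq 1 ] && echo match  # defaulting the expansion keeps the test numeric and quiet

The run tolerates the noise because the failed test simply takes the false branch; a guard like this would only matter if clean stderr were required.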
00:02:59.492 10:43:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.492 10:43:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:59.492 10:43:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.492 10:43:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.492 10:43:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.492 10:43:18 -- spdk/autotest.sh@48 -- # udevadm_pid=129333 00:02:59.492 10:43:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:59.492 10:43:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.492 10:43:18 -- pm/common@17 -- # local monitor 00:02:59.492 10:43:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.492 10:43:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.492 10:43:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.492 10:43:18 -- pm/common@21 -- # date +%s 00:02:59.492 10:43:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.492 10:43:18 -- pm/common@21 -- # date +%s 00:02:59.492 10:43:18 -- pm/common@25 -- # sleep 1 00:02:59.492 10:43:18 -- pm/common@21 -- # date +%s 00:02:59.492 10:43:18 -- pm/common@21 -- # date +%s 00:02:59.492 10:43:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663798 00:02:59.492 10:43:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663798 00:02:59.492 10:43:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663798 00:02:59.492 10:43:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663798 00:02:59.492 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663798_collect-cpu-load.pm.log 00:02:59.492 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663798_collect-vmstat.pm.log 00:02:59.492 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663798_collect-cpu-temp.pm.log 00:02:59.492 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663798_collect-bmc-pm.bmc.pm.log 00:03:00.435 10:43:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:00.435 10:43:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:00.435 10:43:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:00.435 10:43:19 -- common/autotest_common.sh@10 -- # set +x 00:03:00.435 10:43:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:00.435 10:43:19 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:00.435 10:43:19 -- common/autotest_common.sh@10 -- # set +x 00:03:00.435 10:43:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:00.435 10:43:19 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.435 10:43:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.435 10:43:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:00.435 10:43:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.435 10:43:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:00.435 10:43:19 -- common/autotest_common.sh@1455 -- # uname 00:03:00.435 10:43:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:00.435 10:43:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:00.435 10:43:19 -- common/autotest_common.sh@1475 -- # uname 00:03:00.435 10:43:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:00.435 10:43:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:00.435 10:43:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:00.697 lcov: LCOV version 1.15 00:03:00.697 10:43:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:27.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.280 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.692 10:43:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:31.692 10:43:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.692 10:43:50 -- common/autotest_common.sh@10 -- # set +x 00:03:31.692 10:43:50 -- spdk/autotest.sh@78 -- # rm -f 00:03:31.692 10:43:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.994 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:34.994 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.994 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:34.994 10:43:54 -- 
spdk/autotest.sh@83 -- # get_zoned_devs
00:03:34.995 10:43:54 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:34.995 10:43:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:34.995 10:43:54 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:34.995 10:43:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:34.995 10:43:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:34.995 10:43:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:34.995 10:43:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:34.995 10:43:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:34.995 10:43:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:34.995 10:43:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:34.995 10:43:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:34.995 10:43:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:34.995 10:43:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:34.995 10:43:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:35.255 No valid GPT data, bailing
00:03:35.256 10:43:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:35.256 10:43:54 -- scripts/common.sh@394 -- # pt=
00:03:35.256 10:43:54 -- scripts/common.sh@395 -- # return 1
00:03:35.256 10:43:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:35.256 1+0 records in
00:03:35.256 1+0 records out
00:03:35.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00192337 s, 545 MB/s
00:03:35.256 10:43:54 -- spdk/autotest.sh@105 -- # sync
00:03:35.256 10:43:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:35.256 10:43:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:35.256 10:43:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:45.259 10:44:03 -- spdk/autotest.sh@111 -- # uname -s
00:03:45.259 10:44:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:45.259 10:44:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:45.259 10:44:03 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:47.170 Hugepages
00:03:47.170 node hugesize free / total
00:03:47.170 node0 1048576kB 0 / 0
00:03:47.170 node0 2048kB 0 / 0
00:03:47.170 node1 1048576kB 0 / 0
00:03:47.170 node1 2048kB 0 / 0
00:03:47.170
00:03:47.170 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:47.170 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:47.170 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:47.170 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:47.170 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:47.170 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:47.430 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:47.430 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:47.430 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:47.430 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:47.430 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:47.430 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:47.430 10:44:06 -- spdk/autotest.sh@117 -- # uname -s
00:03:47.430 10:44:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:47.430 10:44:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:47.430 10:44:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:51.633 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:51.633 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:53.072 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:53.072 10:44:12 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:54.459 10:44:13 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:54.459 10:44:13 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:54.459 10:44:13 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:54.459 10:44:13 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:54.459 10:44:13 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:54.459 10:44:13 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:54.459 10:44:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:54.459 10:44:13 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:54.459 10:44:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:54.459 10:44:13 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:54.459 10:44:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:03:54.459 10:44:13 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:57.761 Waiting for block devices as requested
00:03:57.761 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:57.761 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:57.761 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:58.022 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:58.022 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:58.022 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:58.283 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:58.283 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:03:58.283 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:03:58.544 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:58.544 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:58.805 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:58.805 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:58.805 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:59.065 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:59.065 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:59.065 0000:00:01.1 (8086 0b00):
vfio-pci -> ioatdma 00:03:59.327 10:44:18 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:59.327 10:44:18 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:59.327 10:44:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:59.327 10:44:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:59.327 10:44:18 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:59.327 10:44:18 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:59.327 10:44:18 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:59.588 10:44:18 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:59.588 10:44:18 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:59.588 10:44:18 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:59.588 10:44:18 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:59.588 10:44:18 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:59.588 10:44:18 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:59.588 10:44:18 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:59.588 10:44:18 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:59.588 10:44:18 -- common/autotest_common.sh@1541 -- # continue 00:03:59.588 10:44:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.588 10:44:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.588 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:03:59.588 10:44:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.588 10:44:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.588 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:03:59.588 10:44:18 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.892 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:02.892 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:02.892 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:02.892 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:02.892 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.157 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:03.729 10:44:22 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:03.729 10:44:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.729 10:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.729 10:44:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.729 10:44:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:03.729 10:44:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.729 10:44:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:03.729 10:44:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:03.729 10:44:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:03.729 10:44:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.729 10:44:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:03.729 10:44:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.729 10:44:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.729 10:44:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.729 10:44:23 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.729 10:44:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.729 10:44:23 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:03.729 10:44:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:03.729 10:44:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.729 10:44:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:03.729 10:44:23 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:03.729 10:44:23 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:03.729 10:44:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:03.729 10:44:23 -- common/autotest_common.sh@1570 -- # return 0 00:04:03.729 10:44:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:03.729 10:44:23 -- common/autotest_common.sh@1578 -- # return 0 00:04:03.729 10:44:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.729 10:44:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.729 10:44:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.729 10:44:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.729 10:44:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.729 10:44:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.729 10:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:03.729 10:44:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.729 10:44:23 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.729 10:44:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.729 10:44:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.729 10:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:03.729 ************************************ 00:04:03.729 START TEST env 00:04:03.729 ************************************ 00:04:03.729 10:44:23 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.729 * Looking for test storage... 
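[Annotation] The opal_revert_cleanup trace above reduces to a small id-match loop over sysfs: read each controller's PCI device id and keep the bdf only if it equals 0x0a54, the device id that routine targets. A condensed sketch of the same check under that assumption; on this box the lone drive reports 0xa80a (the Samsung 144d:a80a seen throughout), so the resulting list stays empty:

  for bdf in 0000:65:00.0; do                      # bdfs as printed by gen_nvme.sh | jq above
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [ "$device" = "0x0a54" ] && echo "$bdf"        # 0xa80a here, so nothing is emitted
  done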
00:04:03.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.990 10:44:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.990 10:44:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.990 10:44:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.990 10:44:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.990 10:44:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.990 10:44:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.990 10:44:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.990 10:44:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.990 10:44:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.990 10:44:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.990 10:44:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.990 10:44:23 env -- scripts/common.sh@344 -- # case "$op" in 00:04:03.990 10:44:23 env -- scripts/common.sh@345 -- # : 1 00:04:03.990 10:44:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.990 10:44:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.990 10:44:23 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.990 10:44:23 env -- scripts/common.sh@353 -- # local d=1 00:04:03.990 10:44:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.990 10:44:23 env -- scripts/common.sh@355 -- # echo 1 00:04:03.990 10:44:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.990 10:44:23 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.990 10:44:23 env -- scripts/common.sh@353 -- # local d=2 00:04:03.990 10:44:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.990 10:44:23 env -- scripts/common.sh@355 -- # echo 2 00:04:03.990 10:44:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.990 10:44:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.990 10:44:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.990 10:44:23 env -- scripts/common.sh@368 -- # return 0 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.990 --rc genhtml_branch_coverage=1 00:04:03.990 --rc genhtml_function_coverage=1 00:04:03.990 --rc genhtml_legend=1 00:04:03.990 --rc geninfo_all_blocks=1 00:04:03.990 --rc geninfo_unexecuted_blocks=1 00:04:03.990 00:04:03.990 ' 00:04:03.990 10:44:23 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.990 --rc genhtml_branch_coverage=1 00:04:03.990 --rc genhtml_function_coverage=1 00:04:03.991 --rc genhtml_legend=1 00:04:03.991 --rc geninfo_all_blocks=1 00:04:03.991 --rc geninfo_unexecuted_blocks=1 00:04:03.991 00:04:03.991 ' 00:04:03.991 10:44:23 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.991 --rc genhtml_branch_coverage=1 00:04:03.991 --rc genhtml_function_coverage=1 
00:04:03.991 --rc genhtml_legend=1 00:04:03.991 --rc geninfo_all_blocks=1 00:04:03.991 --rc geninfo_unexecuted_blocks=1 00:04:03.991 00:04:03.991 ' 00:04:03.991 10:44:23 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.991 --rc genhtml_branch_coverage=1 00:04:03.991 --rc genhtml_function_coverage=1 00:04:03.991 --rc genhtml_legend=1 00:04:03.991 --rc geninfo_all_blocks=1 00:04:03.991 --rc geninfo_unexecuted_blocks=1 00:04:03.991 00:04:03.991 ' 00:04:03.991 10:44:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.991 10:44:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.991 10:44:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.991 10:44:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.991 ************************************ 00:04:03.991 START TEST env_memory 00:04:03.991 ************************************ 00:04:03.991 10:44:23 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.991 00:04:03.991 00:04:03.991 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.991 http://cunit.sourceforge.net/ 00:04:03.991 00:04:03.991 00:04:03.991 Suite: memory 00:04:03.991 Test: alloc and free memory map ...[2024-11-15 10:44:23.456759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.991 passed 00:04:03.991 Test: mem map translation ...[2024-11-15 10:44:23.482297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.991 [2024-11-15 10:44:23.482326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.991 [2024-11-15 10:44:23.482372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.991 [2024-11-15 10:44:23.482379] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.252 passed 00:04:04.252 Test: mem map registration ...[2024-11-15 10:44:23.537537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.252 [2024-11-15 10:44:23.537568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.252 passed 00:04:04.252 Test: mem map adjacent registrations ...passed 00:04:04.252 00:04:04.252 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.252 suites 1 1 n/a 0 0 00:04:04.252 tests 4 4 4 0 0 00:04:04.252 asserts 152 152 152 0 n/a 00:04:04.252 00:04:04.252 Elapsed time = 0.194 seconds 00:04:04.252 00:04:04.252 real 0m0.209s 00:04:04.252 user 0m0.192s 00:04:04.252 sys 0m0.016s 00:04:04.252 10:44:23 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.252 10:44:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
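[Annotation] The *ERROR* lines inside env_memory above are expected output, not failures: they come from the negative-path assertions the suite triggers on purpose (invalid translation parameters, bogus registration lengths), which is why the summary still reports 4/4 tests passed. The unit binary is self-contained and can be rerun outside the run_test wrapper, e.g.:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
  echo $?   # assuming the usual convention, 0 when every CUnit assertion passes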
00:04:04.252 ************************************ 00:04:04.252 END TEST env_memory 00:04:04.252 ************************************ 00:04:04.252 10:44:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.252 10:44:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.252 10:44:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.252 10:44:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.252 ************************************ 00:04:04.252 START TEST env_vtophys 00:04:04.252 ************************************ 00:04:04.252 10:44:23 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.252 EAL: lib.eal log level changed from notice to debug 00:04:04.252 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.252 EAL: Detected lcore 1 as core 1 on socket 0 00:04:04.252 EAL: Detected lcore 2 as core 2 on socket 0 00:04:04.252 EAL: Detected lcore 3 as core 3 on socket 0 00:04:04.252 EAL: Detected lcore 4 as core 4 on socket 0 00:04:04.252 EAL: Detected lcore 5 as core 5 on socket 0 00:04:04.252 EAL: Detected lcore 6 as core 6 on socket 0 00:04:04.252 EAL: Detected lcore 7 as core 7 on socket 0 00:04:04.252 EAL: Detected lcore 8 as core 8 on socket 0 00:04:04.252 EAL: Detected lcore 9 as core 9 on socket 0 00:04:04.252 EAL: Detected lcore 10 as core 10 on socket 0 00:04:04.252 EAL: Detected lcore 11 as core 11 on socket 0 00:04:04.252 EAL: Detected lcore 12 as core 12 on socket 0 00:04:04.252 EAL: Detected lcore 13 as core 13 on socket 0 00:04:04.252 EAL: Detected lcore 14 as core 14 on socket 0 00:04:04.252 EAL: Detected lcore 15 as core 15 on socket 0 00:04:04.252 EAL: Detected lcore 16 as core 16 on socket 0 00:04:04.252 EAL: Detected lcore 17 as core 17 on socket 0 00:04:04.252 EAL: Detected lcore 18 as core 18 on socket 0 00:04:04.252 EAL: Detected lcore 19 as core 19 on socket 0 00:04:04.252 EAL: Detected lcore 20 as core 20 on socket 0 00:04:04.252 EAL: Detected lcore 21 as core 21 on socket 0 00:04:04.252 EAL: Detected lcore 22 as core 22 on socket 0 00:04:04.252 EAL: Detected lcore 23 as core 23 on socket 0 00:04:04.252 EAL: Detected lcore 24 as core 24 on socket 0 00:04:04.252 EAL: Detected lcore 25 as core 25 on socket 0 00:04:04.252 EAL: Detected lcore 26 as core 26 on socket 0 00:04:04.252 EAL: Detected lcore 27 as core 27 on socket 0 00:04:04.252 EAL: Detected lcore 28 as core 28 on socket 0 00:04:04.252 EAL: Detected lcore 29 as core 29 on socket 0 00:04:04.252 EAL: Detected lcore 30 as core 30 on socket 0 00:04:04.252 EAL: Detected lcore 31 as core 31 on socket 0 00:04:04.252 EAL: Detected lcore 32 as core 32 on socket 0 00:04:04.252 EAL: Detected lcore 33 as core 33 on socket 0 00:04:04.252 EAL: Detected lcore 34 as core 34 on socket 0 00:04:04.252 EAL: Detected lcore 35 as core 35 on socket 0 00:04:04.252 EAL: Detected lcore 36 as core 0 on socket 1 00:04:04.252 EAL: Detected lcore 37 as core 1 on socket 1 00:04:04.252 EAL: Detected lcore 38 as core 2 on socket 1 00:04:04.252 EAL: Detected lcore 39 as core 3 on socket 1 00:04:04.252 EAL: Detected lcore 40 as core 4 on socket 1 00:04:04.252 EAL: Detected lcore 41 as core 5 on socket 1 00:04:04.252 EAL: Detected lcore 42 as core 6 on socket 1 00:04:04.252 EAL: Detected lcore 43 as core 7 on socket 1 00:04:04.252 EAL: Detected lcore 44 as core 8 on socket 1 00:04:04.252 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:04.252 EAL: Detected lcore 46 as core 10 on socket 1 00:04:04.252 EAL: Detected lcore 47 as core 11 on socket 1 00:04:04.252 EAL: Detected lcore 48 as core 12 on socket 1 00:04:04.252 EAL: Detected lcore 49 as core 13 on socket 1 00:04:04.252 EAL: Detected lcore 50 as core 14 on socket 1 00:04:04.252 EAL: Detected lcore 51 as core 15 on socket 1 00:04:04.252 EAL: Detected lcore 52 as core 16 on socket 1 00:04:04.252 EAL: Detected lcore 53 as core 17 on socket 1 00:04:04.252 EAL: Detected lcore 54 as core 18 on socket 1 00:04:04.252 EAL: Detected lcore 55 as core 19 on socket 1 00:04:04.252 EAL: Detected lcore 56 as core 20 on socket 1 00:04:04.252 EAL: Detected lcore 57 as core 21 on socket 1 00:04:04.252 EAL: Detected lcore 58 as core 22 on socket 1 00:04:04.252 EAL: Detected lcore 59 as core 23 on socket 1 00:04:04.252 EAL: Detected lcore 60 as core 24 on socket 1 00:04:04.252 EAL: Detected lcore 61 as core 25 on socket 1 00:04:04.252 EAL: Detected lcore 62 as core 26 on socket 1 00:04:04.252 EAL: Detected lcore 63 as core 27 on socket 1 00:04:04.252 EAL: Detected lcore 64 as core 28 on socket 1 00:04:04.252 EAL: Detected lcore 65 as core 29 on socket 1 00:04:04.252 EAL: Detected lcore 66 as core 30 on socket 1 00:04:04.252 EAL: Detected lcore 67 as core 31 on socket 1 00:04:04.252 EAL: Detected lcore 68 as core 32 on socket 1 00:04:04.252 EAL: Detected lcore 69 as core 33 on socket 1 00:04:04.252 EAL: Detected lcore 70 as core 34 on socket 1 00:04:04.252 EAL: Detected lcore 71 as core 35 on socket 1 00:04:04.252 EAL: Detected lcore 72 as core 0 on socket 0 00:04:04.252 EAL: Detected lcore 73 as core 1 on socket 0 00:04:04.252 EAL: Detected lcore 74 as core 2 on socket 0 00:04:04.252 EAL: Detected lcore 75 as core 3 on socket 0 00:04:04.252 EAL: Detected lcore 76 as core 4 on socket 0 00:04:04.252 EAL: Detected lcore 77 as core 5 on socket 0 00:04:04.252 EAL: Detected lcore 78 as core 6 on socket 0 00:04:04.252 EAL: Detected lcore 79 as core 7 on socket 0 00:04:04.252 EAL: Detected lcore 80 as core 8 on socket 0 00:04:04.252 EAL: Detected lcore 81 as core 9 on socket 0 00:04:04.252 EAL: Detected lcore 82 as core 10 on socket 0 00:04:04.252 EAL: Detected lcore 83 as core 11 on socket 0 00:04:04.252 EAL: Detected lcore 84 as core 12 on socket 0 00:04:04.252 EAL: Detected lcore 85 as core 13 on socket 0 00:04:04.252 EAL: Detected lcore 86 as core 14 on socket 0 00:04:04.252 EAL: Detected lcore 87 as core 15 on socket 0 00:04:04.252 EAL: Detected lcore 88 as core 16 on socket 0 00:04:04.252 EAL: Detected lcore 89 as core 17 on socket 0 00:04:04.252 EAL: Detected lcore 90 as core 18 on socket 0 00:04:04.252 EAL: Detected lcore 91 as core 19 on socket 0 00:04:04.252 EAL: Detected lcore 92 as core 20 on socket 0 00:04:04.252 EAL: Detected lcore 93 as core 21 on socket 0 00:04:04.252 EAL: Detected lcore 94 as core 22 on socket 0 00:04:04.253 EAL: Detected lcore 95 as core 23 on socket 0 00:04:04.253 EAL: Detected lcore 96 as core 24 on socket 0 00:04:04.253 EAL: Detected lcore 97 as core 25 on socket 0 00:04:04.253 EAL: Detected lcore 98 as core 26 on socket 0 00:04:04.253 EAL: Detected lcore 99 as core 27 on socket 0 00:04:04.253 EAL: Detected lcore 100 as core 28 on socket 0 00:04:04.253 EAL: Detected lcore 101 as core 29 on socket 0 00:04:04.253 EAL: Detected lcore 102 as core 30 on socket 0 00:04:04.253 EAL: Detected lcore 103 as core 31 on socket 0 00:04:04.253 EAL: Detected lcore 104 as core 32 on socket 0 00:04:04.253 EAL: Detected lcore 105 as core 33 on socket 0 00:04:04.253 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:04.253 EAL: Detected lcore 107 as core 35 on socket 0 00:04:04.253 EAL: Detected lcore 108 as core 0 on socket 1 00:04:04.253 EAL: Detected lcore 109 as core 1 on socket 1 00:04:04.253 EAL: Detected lcore 110 as core 2 on socket 1 00:04:04.253 EAL: Detected lcore 111 as core 3 on socket 1 00:04:04.253 EAL: Detected lcore 112 as core 4 on socket 1 00:04:04.253 EAL: Detected lcore 113 as core 5 on socket 1 00:04:04.253 EAL: Detected lcore 114 as core 6 on socket 1 00:04:04.253 EAL: Detected lcore 115 as core 7 on socket 1 00:04:04.253 EAL: Detected lcore 116 as core 8 on socket 1 00:04:04.253 EAL: Detected lcore 117 as core 9 on socket 1 00:04:04.253 EAL: Detected lcore 118 as core 10 on socket 1 00:04:04.253 EAL: Detected lcore 119 as core 11 on socket 1 00:04:04.253 EAL: Detected lcore 120 as core 12 on socket 1 00:04:04.253 EAL: Detected lcore 121 as core 13 on socket 1 00:04:04.253 EAL: Detected lcore 122 as core 14 on socket 1 00:04:04.253 EAL: Detected lcore 123 as core 15 on socket 1 00:04:04.253 EAL: Detected lcore 124 as core 16 on socket 1 00:04:04.253 EAL: Detected lcore 125 as core 17 on socket 1 00:04:04.253 EAL: Detected lcore 126 as core 18 on socket 1 00:04:04.253 EAL: Detected lcore 127 as core 19 on socket 1 00:04:04.253 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:04.253 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:04.253 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:04.253 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:04.253 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:04.253 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:04.253 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:04.253 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:04.253 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:04.253 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:04.253 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:04.253 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:04.253 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:04.253 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:04.253 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:04.253 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:04.253 EAL: Maximum logical cores by configuration: 128 00:04:04.253 EAL: Detected CPU lcores: 128 00:04:04.253 EAL: Detected NUMA nodes: 2 00:04:04.253 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.253 EAL: Detected shared linkage of DPDK 00:04:04.253 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.253 EAL: Bus pci wants IOVA as 'DC' 00:04:04.253 EAL: Buses did not request a specific IOVA mode. 00:04:04.253 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:04.253 EAL: Selected IOVA mode 'VA' 00:04:04.253 EAL: Probing VFIO support... 00:04:04.253 EAL: IOMMU type 1 (Type 1) is supported 00:04:04.253 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:04.253 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:04.253 EAL: VFIO support initialized 00:04:04.253 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.253 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.253 EAL: Setting up physically contiguous memory... 
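[Annotation] The NUMA and hugepage probing logged around here maps onto standard sysfs counters; a small sketch (plain sysfs paths, nothing SPDK-specific) that prints the same per-node free/total figures the setup.sh status table showed earlier:

  for n in /sys/devices/system/node/node*; do
    for hp in "$n"/hugepages/hugepages-*; do        # e.g. hugepages-2048kB, hugepages-1048576kB
      printf '%s %s free=%s total=%s\n' "${n##*/}" "${hp##*/}" \
        "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
  done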
00:04:04.253 EAL: Setting maximum number of open files to 524288 00:04:04.253 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.253 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:04.253 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.253 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:04.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.253 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:04.253 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.253 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:04.253 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:04.253 EAL: Hugepages will be freed exactly as allocated. 00:04:04.253 EAL: No shared files mode enabled, IPC is disabled 00:04:04.253 EAL: No shared files mode enabled, IPC is disabled 00:04:04.253 EAL: TSC frequency is ~2400000 KHz 00:04:04.253 EAL: Main lcore 0 is ready (tid=7f879d141a00;cpuset=[0]) 00:04:04.253 EAL: Trying to obtain current memory policy. 00:04:04.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.253 EAL: Restoring previous memory policy: 0 00:04:04.253 EAL: request: mp_malloc_sync 00:04:04.253 EAL: No shared files mode enabled, IPC is disabled 00:04:04.253 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.253 EAL: No shared files mode enabled, IPC is disabled 00:04:04.513 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.513 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.513 00:04:04.513 00:04:04.513 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.513 http://cunit.sourceforge.net/ 00:04:04.513 00:04:04.513 00:04:04.513 Suite: components_suite 00:04:04.513 Test: vtophys_malloc_test ...passed 00:04:04.513 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.513 EAL: Restoring previous memory policy: 4 00:04:04.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.514 EAL: Trying to obtain current memory policy. 
00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 4 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.514 EAL: Trying to obtain current memory policy. 
00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.034 EAL: Restoring previous memory policy: 4 00:04:05.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.034 EAL: request: mp_malloc_sync 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.034 EAL: request: mp_malloc_sync 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.034 passed 00:04:05.034 00:04:05.034 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.034 suites 1 1 n/a 0 0 00:04:05.034 tests 2 2 2 0 0 00:04:05.034 asserts 497 497 497 0 n/a 00:04:05.034 00:04:05.034 Elapsed time = 0.687 seconds 00:04:05.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.034 EAL: request: mp_malloc_sync 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 00:04:05.034 real 0m0.840s 00:04:05.034 user 0m0.443s 00:04:05.034 sys 0m0.369s 00:04:05.034 10:44:24 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.034 10:44:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.034 ************************************ 00:04:05.034 END TEST env_vtophys 00:04:05.034 ************************************ 00:04:05.295 10:44:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.295 10:44:24 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.295 10:44:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.295 10:44:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.295 ************************************ 00:04:05.295 START TEST env_pci 00:04:05.295 ************************************ 00:04:05.295 10:44:24 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.296 00:04:05.296 00:04:05.296 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.296 http://cunit.sourceforge.net/ 00:04:05.296 00:04:05.296 00:04:05.296 Suite: pci 00:04:05.296 Test: pci_hook ...[2024-11-15 10:44:24.626268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 148731 has claimed it 00:04:05.296 EAL: Cannot find device (10000:00:01.0) 00:04:05.296 EAL: Failed to attach device on primary process 00:04:05.296 passed 00:04:05.296 00:04:05.296 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:05.296 suites 1 1 n/a 0 0 00:04:05.296 tests 1 1 1 0 0 00:04:05.296 asserts 25 25 25 0 n/a 00:04:05.296 00:04:05.296 Elapsed time = 0.031 seconds 00:04:05.296 00:04:05.296 real 0m0.053s 00:04:05.296 user 0m0.018s 00:04:05.296 sys 0m0.035s 00:04:05.296 10:44:24 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.296 10:44:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.296 ************************************ 00:04:05.296 END TEST env_pci 00:04:05.296 ************************************ 00:04:05.296 10:44:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.296 10:44:24 env -- env/env.sh@15 -- # uname 00:04:05.296 10:44:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.296 10:44:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.296 10:44:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.296 10:44:24 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:05.296 10:44:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.296 10:44:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.296 ************************************ 00:04:05.296 START TEST env_dpdk_post_init 00:04:05.296 ************************************ 00:04:05.296 10:44:24 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.296 EAL: Detected CPU lcores: 128 00:04:05.296 EAL: Detected NUMA nodes: 2 00:04:05.296 EAL: Detected shared linkage of DPDK 00:04:05.296 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.296 EAL: Selected IOVA mode 'VA' 00:04:05.296 EAL: VFIO support initialized 00:04:05.296 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.557 EAL: Using IOMMU type 1 (Type 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.817 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:05.817 EAL: Ignore mapping IO port bar(1) 00:04:05.817 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:06.077 EAL: Ignore mapping IO port bar(1) 00:04:06.077 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:06.337 EAL: Ignore mapping IO port bar(1) 00:04:06.337 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:06.598 EAL: Ignore mapping IO port bar(1) 00:04:06.598 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:06.860 EAL: Ignore mapping IO port bar(1) 00:04:06.860 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:06.860 EAL: Ignore mapping IO port bar(1) 00:04:07.120 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:07.120 EAL: Ignore mapping IO port bar(1) 00:04:07.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:07.381 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:07.641 EAL: Ignore mapping IO port bar(1) 00:04:07.641 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:07.901 EAL: Ignore mapping IO port bar(1) 00:04:07.901 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:08.161 EAL: Ignore mapping IO port bar(1) 00:04:08.162 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:08.422 EAL: Ignore mapping IO port bar(1) 00:04:08.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:08.422 EAL: Ignore mapping IO port bar(1) 00:04:08.684 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:08.684 EAL: Ignore mapping IO port bar(1) 00:04:08.944 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:08.944 EAL: Ignore mapping IO port bar(1) 00:04:09.206 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:09.206 EAL: Ignore mapping IO port bar(1) 00:04:09.206 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:09.206 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:09.206 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:09.467 Starting DPDK initialization... 00:04:09.467 Starting SPDK post initialization... 00:04:09.467 SPDK NVMe probe 00:04:09.467 Attaching to 0000:65:00.0 00:04:09.467 Attached to 0000:65:00.0 00:04:09.467 Cleaning up... 00:04:11.383 00:04:11.383 real 0m5.746s 00:04:11.383 user 0m0.116s 00:04:11.383 sys 0m0.188s 00:04:11.383 10:44:30 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.383 10:44:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.383 ************************************ 00:04:11.383 END TEST env_dpdk_post_init 00:04:11.383 ************************************ 00:04:11.383 10:44:30 env -- env/env.sh@26 -- # uname 00:04:11.383 10:44:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:11.383 10:44:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.383 10:44:30 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.384 10:44:30 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.384 10:44:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.384 ************************************ 00:04:11.384 START TEST env_mem_callbacks 00:04:11.384 ************************************ 00:04:11.384 10:44:30 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.384 EAL: Detected CPU lcores: 128 00:04:11.384 EAL: Detected NUMA nodes: 2 00:04:11.384 EAL: Detected shared linkage of DPDK 00:04:11.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:11.384 EAL: Selected IOVA mode 'VA' 00:04:11.384 EAL: VFIO support initialized 00:04:11.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:11.384 00:04:11.384 00:04:11.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.384 http://cunit.sourceforge.net/ 00:04:11.384 00:04:11.384 00:04:11.384 Suite: memory 00:04:11.384 Test: test ... 
00:04:11.384 register 0x200000200000 2097152 00:04:11.384 malloc 3145728 00:04:11.384 register 0x200000400000 4194304 00:04:11.384 buf 0x200000500000 len 3145728 PASSED 00:04:11.384 malloc 64 00:04:11.384 buf 0x2000004fff40 len 64 PASSED 00:04:11.384 malloc 4194304 00:04:11.384 register 0x200000800000 6291456 00:04:11.384 buf 0x200000a00000 len 4194304 PASSED 00:04:11.384 free 0x200000500000 3145728 00:04:11.384 free 0x2000004fff40 64 00:04:11.384 unregister 0x200000400000 4194304 PASSED 00:04:11.384 free 0x200000a00000 4194304 00:04:11.384 unregister 0x200000800000 6291456 PASSED 00:04:11.384 malloc 8388608 00:04:11.384 register 0x200000400000 10485760 00:04:11.384 buf 0x200000600000 len 8388608 PASSED 00:04:11.384 free 0x200000600000 8388608 00:04:11.384 unregister 0x200000400000 10485760 PASSED 00:04:11.384 passed 00:04:11.384 00:04:11.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.384 suites 1 1 n/a 0 0 00:04:11.384 tests 1 1 1 0 0 00:04:11.384 asserts 15 15 15 0 n/a 00:04:11.384 00:04:11.384 Elapsed time = 0.010 seconds 00:04:11.384 00:04:11.384 real 0m0.069s 00:04:11.384 user 0m0.021s 00:04:11.384 sys 0m0.047s 00:04:11.384 10:44:30 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.384 10:44:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:11.384 ************************************ 00:04:11.384 END TEST env_mem_callbacks 00:04:11.384 ************************************ 00:04:11.384 00:04:11.384 real 0m7.527s 00:04:11.384 user 0m1.057s 00:04:11.384 sys 0m1.033s 00:04:11.384 10:44:30 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.384 10:44:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.384 ************************************ 00:04:11.384 END TEST env 00:04:11.384 ************************************ 00:04:11.384 10:44:30 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.384 10:44:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.384 10:44:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.384 10:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:11.384 ************************************ 00:04:11.384 START TEST rpc 00:04:11.384 ************************************ 00:04:11.384 10:44:30 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.384 * Looking for test storage... 
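One note before the rpc suite output: the register/unregister PASSED lines from mem_callbacks above exercise SPDK's public memory-registration API, which notifies every registered mem map (vtophys, vfio, ...). A compilable sketch under stated assumptions — the 2 MB size/alignment and the env-setup calls are illustrative, not values from this run, and the init signature may differ slightly across SPDK releases:

#include <stdlib.h>
#include <spdk/env.h>

int
main(void)
{
	struct spdk_env_opts opts;
	void *buf = NULL;

	spdk_env_opts_init(&opts);           /* signature per recent SPDK */
	opts.name = "mem_cb_sketch";         /* illustrative name */
	if (spdk_env_init(&opts) < 0)
		return 1;
	/* Registrations must be 2 MB aligned; the register/unregister
	 * lines in the test output are the notify callbacks firing. */
	if (posix_memalign(&buf, 0x200000, 0x200000) != 0)
		return 1;
	if (spdk_mem_register(buf, 0x200000) != 0)
		return 1;
	spdk_mem_unregister(buf, 0x200000);
	free(buf);
	return 0;
}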
00:04:11.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.384 10:44:30 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.384 10:44:30 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.384 10:44:30 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.645 10:44:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.645 10:44:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.645 10:44:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.645 10:44:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.645 10:44:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.645 10:44:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.645 10:44:30 rpc -- scripts/common.sh@345 -- # : 1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.645 10:44:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.645 10:44:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.645 10:44:30 rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.645 10:44:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.645 10:44:30 rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.645 10:44:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.645 10:44:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.645 10:44:30 rpc -- scripts/common.sh@368 -- # return 0 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.645 --rc genhtml_branch_coverage=1 00:04:11.645 --rc genhtml_function_coverage=1 00:04:11.645 --rc genhtml_legend=1 00:04:11.645 --rc geninfo_all_blocks=1 00:04:11.645 --rc geninfo_unexecuted_blocks=1 00:04:11.645 00:04:11.645 ' 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.645 --rc genhtml_branch_coverage=1 00:04:11.645 --rc genhtml_function_coverage=1 00:04:11.645 --rc genhtml_legend=1 00:04:11.645 --rc geninfo_all_blocks=1 00:04:11.645 --rc geninfo_unexecuted_blocks=1 00:04:11.645 00:04:11.645 ' 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.645 --rc genhtml_branch_coverage=1 00:04:11.645 --rc genhtml_function_coverage=1 
00:04:11.645 --rc genhtml_legend=1 00:04:11.645 --rc geninfo_all_blocks=1 00:04:11.645 --rc geninfo_unexecuted_blocks=1 00:04:11.645 00:04:11.645 ' 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.645 --rc genhtml_branch_coverage=1 00:04:11.645 --rc genhtml_function_coverage=1 00:04:11.645 --rc genhtml_legend=1 00:04:11.645 --rc geninfo_all_blocks=1 00:04:11.645 --rc geninfo_unexecuted_blocks=1 00:04:11.645 00:04:11.645 ' 00:04:11.645 10:44:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=150072 00:04:11.645 10:44:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.645 10:44:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 150072 00:04:11.645 10:44:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@833 -- # '[' -z 150072 ']' 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.645 10:44:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.645 [2024-11-15 10:44:31.037140] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:11.645 [2024-11-15 10:44:31.037205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150072 ] 00:04:11.645 [2024-11-15 10:44:31.128428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.906 [2024-11-15 10:44:31.181367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.907 [2024-11-15 10:44:31.181423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 150072' to capture a snapshot of events at runtime. 00:04:11.907 [2024-11-15 10:44:31.181433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.907 [2024-11-15 10:44:31.181440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.907 [2024-11-15 10:44:31.181447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid150072 for offline analysis/debug. 
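For orientation, the app.c NOTICE lines above are the standard spdk_app_start boot sequence of spdk_tgt: the -e bdev flag produced the "Tracepoint Group Mask bdev specified" notice and the /dev/shm trace hint, and the reactor-start notice follows just below. A bare-bones app skeleton showing where these messages originate; the app name and log text here are illustrative, not the real spdk_tgt source:

#include <spdk/event.h>
#include <spdk/log.h>

static void
start_cb(void *ctx)
{
	/* Runs once the reactors are up -- i.e. right after a
	 * "Reactor started on core 0" notice like the one below. */
	(void)ctx;
	SPDK_NOTICELOG("app is up\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	(void)argc;
	(void)argv;
	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "demo_tgt"; /* the log above comes from "spdk_tgt" */
	rc = spdk_app_start(&opts, start_cb, NULL);
	spdk_app_fini();
	return rc;
}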
00:04:11.907 [2024-11-15 10:44:31.182208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.480 10:44:31 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.480 10:44:31 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:12.480 10:44:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.480 10:44:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.480 10:44:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:12.480 10:44:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:12.480 10:44:31 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.480 10:44:31 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.480 10:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.480 ************************************ 00:04:12.480 START TEST rpc_integrity 00:04:12.480 ************************************ 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.480 10:44:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.480 { 00:04:12.480 "name": "Malloc0", 00:04:12.480 "aliases": [ 00:04:12.480 "1f1424e1-a5d8-4d42-9405-a3c7994e38b5" 00:04:12.480 ], 00:04:12.480 "product_name": "Malloc disk", 00:04:12.480 "block_size": 512, 00:04:12.480 "num_blocks": 16384, 00:04:12.480 "uuid": "1f1424e1-a5d8-4d42-9405-a3c7994e38b5", 00:04:12.480 "assigned_rate_limits": { 00:04:12.480 "rw_ios_per_sec": 0, 00:04:12.480 "rw_mbytes_per_sec": 0, 00:04:12.480 "r_mbytes_per_sec": 0, 00:04:12.480 "w_mbytes_per_sec": 0 00:04:12.480 }, 
00:04:12.480 "claimed": false, 00:04:12.480 "zoned": false, 00:04:12.480 "supported_io_types": { 00:04:12.480 "read": true, 00:04:12.480 "write": true, 00:04:12.480 "unmap": true, 00:04:12.480 "flush": true, 00:04:12.480 "reset": true, 00:04:12.480 "nvme_admin": false, 00:04:12.480 "nvme_io": false, 00:04:12.480 "nvme_io_md": false, 00:04:12.480 "write_zeroes": true, 00:04:12.480 "zcopy": true, 00:04:12.480 "get_zone_info": false, 00:04:12.480 "zone_management": false, 00:04:12.480 "zone_append": false, 00:04:12.480 "compare": false, 00:04:12.480 "compare_and_write": false, 00:04:12.480 "abort": true, 00:04:12.480 "seek_hole": false, 00:04:12.480 "seek_data": false, 00:04:12.480 "copy": true, 00:04:12.480 "nvme_iov_md": false 00:04:12.480 }, 00:04:12.480 "memory_domains": [ 00:04:12.480 { 00:04:12.480 "dma_device_id": "system", 00:04:12.480 "dma_device_type": 1 00:04:12.480 }, 00:04:12.480 { 00:04:12.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.480 "dma_device_type": 2 00:04:12.480 } 00:04:12.480 ], 00:04:12.480 "driver_specific": {} 00:04:12.480 } 00:04:12.480 ]' 00:04:12.480 10:44:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:12.832 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.832 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.832 [2024-11-15 10:44:32.019199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:12.832 [2024-11-15 10:44:32.019246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.832 [2024-11-15 10:44:32.019263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2061800 00:04:12.832 [2024-11-15 10:44:32.019271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.832 [2024-11-15 10:44:32.020802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.832 [2024-11-15 10:44:32.020838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.832 Passthru0 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.832 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.832 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.832 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.832 { 00:04:12.832 "name": "Malloc0", 00:04:12.832 "aliases": [ 00:04:12.832 "1f1424e1-a5d8-4d42-9405-a3c7994e38b5" 00:04:12.832 ], 00:04:12.832 "product_name": "Malloc disk", 00:04:12.832 "block_size": 512, 00:04:12.832 "num_blocks": 16384, 00:04:12.832 "uuid": "1f1424e1-a5d8-4d42-9405-a3c7994e38b5", 00:04:12.832 "assigned_rate_limits": { 00:04:12.832 "rw_ios_per_sec": 0, 00:04:12.832 "rw_mbytes_per_sec": 0, 00:04:12.832 "r_mbytes_per_sec": 0, 00:04:12.832 "w_mbytes_per_sec": 0 00:04:12.832 }, 00:04:12.832 "claimed": true, 00:04:12.832 "claim_type": "exclusive_write", 00:04:12.832 "zoned": false, 00:04:12.832 "supported_io_types": { 00:04:12.832 "read": true, 00:04:12.832 "write": true, 00:04:12.832 "unmap": true, 00:04:12.832 "flush": 
true, 00:04:12.832 "reset": true, 00:04:12.832 "nvme_admin": false, 00:04:12.832 "nvme_io": false, 00:04:12.832 "nvme_io_md": false, 00:04:12.832 "write_zeroes": true, 00:04:12.832 "zcopy": true, 00:04:12.832 "get_zone_info": false, 00:04:12.832 "zone_management": false, 00:04:12.832 "zone_append": false, 00:04:12.832 "compare": false, 00:04:12.832 "compare_and_write": false, 00:04:12.832 "abort": true, 00:04:12.832 "seek_hole": false, 00:04:12.832 "seek_data": false, 00:04:12.832 "copy": true, 00:04:12.832 "nvme_iov_md": false 00:04:12.832 }, 00:04:12.832 "memory_domains": [ 00:04:12.832 { 00:04:12.832 "dma_device_id": "system", 00:04:12.832 "dma_device_type": 1 00:04:12.832 }, 00:04:12.832 { 00:04:12.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.832 "dma_device_type": 2 00:04:12.832 } 00:04:12.832 ], 00:04:12.832 "driver_specific": {} 00:04:12.832 }, 00:04:12.832 { 00:04:12.832 "name": "Passthru0", 00:04:12.832 "aliases": [ 00:04:12.832 "520b59f7-bb5a-508f-960a-aabb30d885eb" 00:04:12.832 ], 00:04:12.832 "product_name": "passthru", 00:04:12.832 "block_size": 512, 00:04:12.832 "num_blocks": 16384, 00:04:12.832 "uuid": "520b59f7-bb5a-508f-960a-aabb30d885eb", 00:04:12.832 "assigned_rate_limits": { 00:04:12.832 "rw_ios_per_sec": 0, 00:04:12.832 "rw_mbytes_per_sec": 0, 00:04:12.832 "r_mbytes_per_sec": 0, 00:04:12.832 "w_mbytes_per_sec": 0 00:04:12.832 }, 00:04:12.832 "claimed": false, 00:04:12.832 "zoned": false, 00:04:12.832 "supported_io_types": { 00:04:12.832 "read": true, 00:04:12.833 "write": true, 00:04:12.833 "unmap": true, 00:04:12.833 "flush": true, 00:04:12.833 "reset": true, 00:04:12.833 "nvme_admin": false, 00:04:12.833 "nvme_io": false, 00:04:12.833 "nvme_io_md": false, 00:04:12.833 "write_zeroes": true, 00:04:12.833 "zcopy": true, 00:04:12.833 "get_zone_info": false, 00:04:12.833 "zone_management": false, 00:04:12.833 "zone_append": false, 00:04:12.833 "compare": false, 00:04:12.833 "compare_and_write": false, 00:04:12.833 "abort": true, 00:04:12.833 "seek_hole": false, 00:04:12.833 "seek_data": false, 00:04:12.833 "copy": true, 00:04:12.833 "nvme_iov_md": false 00:04:12.833 }, 00:04:12.833 "memory_domains": [ 00:04:12.833 { 00:04:12.833 "dma_device_id": "system", 00:04:12.833 "dma_device_type": 1 00:04:12.833 }, 00:04:12.833 { 00:04:12.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.833 "dma_device_type": 2 00:04:12.833 } 00:04:12.833 ], 00:04:12.833 "driver_specific": { 00:04:12.833 "passthru": { 00:04:12.833 "name": "Passthru0", 00:04:12.833 "base_bdev_name": "Malloc0" 00:04:12.833 } 00:04:12.833 } 00:04:12.833 } 00:04:12.833 ]' 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.833 10:44:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.833 00:04:12.833 real 0m0.290s 00:04:12.833 user 0m0.178s 00:04:12.833 sys 0m0.049s 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 ************************************ 00:04:12.833 END TEST rpc_integrity 00:04:12.833 ************************************ 00:04:12.833 10:44:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:12.833 10:44:32 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.833 10:44:32 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.833 10:44:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 ************************************ 00:04:12.833 START TEST rpc_plugins 00:04:12.833 ************************************ 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:12.833 { 00:04:12.833 "name": "Malloc1", 00:04:12.833 "aliases": [ 00:04:12.833 "5bd82867-423a-4681-aa54-a07f16c2aa9f" 00:04:12.833 ], 00:04:12.833 "product_name": "Malloc disk", 00:04:12.833 "block_size": 4096, 00:04:12.833 "num_blocks": 256, 00:04:12.833 "uuid": "5bd82867-423a-4681-aa54-a07f16c2aa9f", 00:04:12.833 "assigned_rate_limits": { 00:04:12.833 "rw_ios_per_sec": 0, 00:04:12.833 "rw_mbytes_per_sec": 0, 00:04:12.833 "r_mbytes_per_sec": 0, 00:04:12.833 "w_mbytes_per_sec": 0 00:04:12.833 }, 00:04:12.833 "claimed": false, 00:04:12.833 "zoned": false, 00:04:12.833 "supported_io_types": { 00:04:12.833 "read": true, 00:04:12.833 "write": true, 00:04:12.833 "unmap": true, 00:04:12.833 "flush": true, 00:04:12.833 "reset": true, 00:04:12.833 "nvme_admin": false, 00:04:12.833 "nvme_io": false, 00:04:12.833 "nvme_io_md": false, 00:04:12.833 "write_zeroes": true, 00:04:12.833 "zcopy": true, 00:04:12.833 "get_zone_info": false, 00:04:12.833 "zone_management": false, 00:04:12.833 "zone_append": false, 00:04:12.833 "compare": false, 00:04:12.833 "compare_and_write": false, 00:04:12.833 "abort": true, 00:04:12.833 "seek_hole": false, 00:04:12.833 "seek_data": false, 00:04:12.833 "copy": true, 00:04:12.833 "nvme_iov_md": false 
00:04:12.833 }, 00:04:12.833 "memory_domains": [ 00:04:12.833 { 00:04:12.833 "dma_device_id": "system", 00:04:12.833 "dma_device_type": 1 00:04:12.833 }, 00:04:12.833 { 00:04:12.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.833 "dma_device_type": 2 00:04:12.833 } 00:04:12.833 ], 00:04:12.833 "driver_specific": {} 00:04:12.833 } 00:04:12.833 ]' 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.833 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.833 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.094 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.094 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.094 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.094 10:44:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.094 00:04:13.094 real 0m0.155s 00:04:13.094 user 0m0.096s 00:04:13.094 sys 0m0.020s 00:04:13.094 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.094 10:44:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.094 ************************************ 00:04:13.094 END TEST rpc_plugins 00:04:13.094 ************************************ 00:04:13.094 10:44:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.094 10:44:32 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.094 10:44:32 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.094 10:44:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.094 ************************************ 00:04:13.094 START TEST rpc_trace_cmd_test 00:04:13.094 ************************************ 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.094 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:13.094 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid150072", 00:04:13.094 "tpoint_group_mask": "0x8", 00:04:13.094 "iscsi_conn": { 00:04:13.094 "mask": "0x2", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "scsi": { 00:04:13.094 "mask": "0x4", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "bdev": { 00:04:13.094 "mask": "0x8", 00:04:13.094 "tpoint_mask": "0xffffffffffffffff" 00:04:13.094 }, 00:04:13.094 "nvmf_rdma": { 00:04:13.094 "mask": "0x10", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "nvmf_tcp": { 00:04:13.094 "mask": "0x20", 00:04:13.094 
"tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "ftl": { 00:04:13.094 "mask": "0x40", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "blobfs": { 00:04:13.094 "mask": "0x80", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "dsa": { 00:04:13.094 "mask": "0x200", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "thread": { 00:04:13.094 "mask": "0x400", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "nvme_pcie": { 00:04:13.094 "mask": "0x800", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "iaa": { 00:04:13.094 "mask": "0x1000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "nvme_tcp": { 00:04:13.094 "mask": "0x2000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "bdev_nvme": { 00:04:13.094 "mask": "0x4000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "sock": { 00:04:13.094 "mask": "0x8000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "blob": { 00:04:13.094 "mask": "0x10000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "bdev_raid": { 00:04:13.094 "mask": "0x20000", 00:04:13.094 "tpoint_mask": "0x0" 00:04:13.094 }, 00:04:13.094 "scheduler": { 00:04:13.095 "mask": "0x40000", 00:04:13.095 "tpoint_mask": "0x0" 00:04:13.095 } 00:04:13.095 }' 00:04:13.095 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:13.095 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:13.095 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.095 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.095 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.356 00:04:13.356 real 0m0.233s 00:04:13.356 user 0m0.182s 00:04:13.356 sys 0m0.043s 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.356 10:44:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.356 ************************************ 00:04:13.356 END TEST rpc_trace_cmd_test 00:04:13.356 ************************************ 00:04:13.356 10:44:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.356 10:44:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.356 10:44:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.356 10:44:32 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.356 10:44:32 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.356 10:44:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.356 ************************************ 00:04:13.356 START TEST rpc_daemon_integrity 00:04:13.356 ************************************ 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.356 10:44:32 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.356 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.619 { 00:04:13.619 "name": "Malloc2", 00:04:13.619 "aliases": [ 00:04:13.619 "ca42d9f4-4dec-4b48-8afd-e69e7abe0e51" 00:04:13.619 ], 00:04:13.619 "product_name": "Malloc disk", 00:04:13.619 "block_size": 512, 00:04:13.619 "num_blocks": 16384, 00:04:13.619 "uuid": "ca42d9f4-4dec-4b48-8afd-e69e7abe0e51", 00:04:13.619 "assigned_rate_limits": { 00:04:13.619 "rw_ios_per_sec": 0, 00:04:13.619 "rw_mbytes_per_sec": 0, 00:04:13.619 "r_mbytes_per_sec": 0, 00:04:13.619 "w_mbytes_per_sec": 0 00:04:13.619 }, 00:04:13.619 "claimed": false, 00:04:13.619 "zoned": false, 00:04:13.619 "supported_io_types": { 00:04:13.619 "read": true, 00:04:13.619 "write": true, 00:04:13.619 "unmap": true, 00:04:13.619 "flush": true, 00:04:13.619 "reset": true, 00:04:13.619 "nvme_admin": false, 00:04:13.619 "nvme_io": false, 00:04:13.619 "nvme_io_md": false, 00:04:13.619 "write_zeroes": true, 00:04:13.619 "zcopy": true, 00:04:13.619 "get_zone_info": false, 00:04:13.619 "zone_management": false, 00:04:13.619 "zone_append": false, 00:04:13.619 "compare": false, 00:04:13.619 "compare_and_write": false, 00:04:13.619 "abort": true, 00:04:13.619 "seek_hole": false, 00:04:13.619 "seek_data": false, 00:04:13.619 "copy": true, 00:04:13.619 "nvme_iov_md": false 00:04:13.619 }, 00:04:13.619 "memory_domains": [ 00:04:13.619 { 00:04:13.619 "dma_device_id": "system", 00:04:13.619 "dma_device_type": 1 00:04:13.619 }, 00:04:13.619 { 00:04:13.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.619 "dma_device_type": 2 00:04:13.619 } 00:04:13.619 ], 00:04:13.619 "driver_specific": {} 00:04:13.619 } 00:04:13.619 ]' 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.619 [2024-11-15 10:44:32.941701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:13.619 
[2024-11-15 10:44:32.941744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.619 [2024-11-15 10:44:32.941760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1e920 00:04:13.619 [2024-11-15 10:44:32.941768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.619 [2024-11-15 10:44:32.943303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.619 [2024-11-15 10:44:32.943338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.619 Passthru0 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.619 { 00:04:13.619 "name": "Malloc2", 00:04:13.619 "aliases": [ 00:04:13.619 "ca42d9f4-4dec-4b48-8afd-e69e7abe0e51" 00:04:13.619 ], 00:04:13.619 "product_name": "Malloc disk", 00:04:13.619 "block_size": 512, 00:04:13.619 "num_blocks": 16384, 00:04:13.619 "uuid": "ca42d9f4-4dec-4b48-8afd-e69e7abe0e51", 00:04:13.619 "assigned_rate_limits": { 00:04:13.619 "rw_ios_per_sec": 0, 00:04:13.619 "rw_mbytes_per_sec": 0, 00:04:13.619 "r_mbytes_per_sec": 0, 00:04:13.619 "w_mbytes_per_sec": 0 00:04:13.619 }, 00:04:13.619 "claimed": true, 00:04:13.619 "claim_type": "exclusive_write", 00:04:13.619 "zoned": false, 00:04:13.619 "supported_io_types": { 00:04:13.619 "read": true, 00:04:13.619 "write": true, 00:04:13.619 "unmap": true, 00:04:13.619 "flush": true, 00:04:13.619 "reset": true, 00:04:13.619 "nvme_admin": false, 00:04:13.619 "nvme_io": false, 00:04:13.619 "nvme_io_md": false, 00:04:13.619 "write_zeroes": true, 00:04:13.619 "zcopy": true, 00:04:13.619 "get_zone_info": false, 00:04:13.619 "zone_management": false, 00:04:13.619 "zone_append": false, 00:04:13.619 "compare": false, 00:04:13.619 "compare_and_write": false, 00:04:13.619 "abort": true, 00:04:13.619 "seek_hole": false, 00:04:13.619 "seek_data": false, 00:04:13.619 "copy": true, 00:04:13.619 "nvme_iov_md": false 00:04:13.619 }, 00:04:13.619 "memory_domains": [ 00:04:13.619 { 00:04:13.619 "dma_device_id": "system", 00:04:13.619 "dma_device_type": 1 00:04:13.619 }, 00:04:13.619 { 00:04:13.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.619 "dma_device_type": 2 00:04:13.619 } 00:04:13.619 ], 00:04:13.619 "driver_specific": {} 00:04:13.619 }, 00:04:13.619 { 00:04:13.619 "name": "Passthru0", 00:04:13.619 "aliases": [ 00:04:13.619 "0df15779-9870-5883-ae83-f69c45dc94d6" 00:04:13.619 ], 00:04:13.619 "product_name": "passthru", 00:04:13.619 "block_size": 512, 00:04:13.619 "num_blocks": 16384, 00:04:13.619 "uuid": "0df15779-9870-5883-ae83-f69c45dc94d6", 00:04:13.619 "assigned_rate_limits": { 00:04:13.619 "rw_ios_per_sec": 0, 00:04:13.619 "rw_mbytes_per_sec": 0, 00:04:13.619 "r_mbytes_per_sec": 0, 00:04:13.619 "w_mbytes_per_sec": 0 00:04:13.619 }, 00:04:13.619 "claimed": false, 00:04:13.619 "zoned": false, 00:04:13.619 "supported_io_types": { 00:04:13.619 "read": true, 00:04:13.619 "write": true, 00:04:13.619 "unmap": true, 00:04:13.619 "flush": true, 00:04:13.619 "reset": true, 
00:04:13.619 "nvme_admin": false, 00:04:13.619 "nvme_io": false, 00:04:13.619 "nvme_io_md": false, 00:04:13.619 "write_zeroes": true, 00:04:13.619 "zcopy": true, 00:04:13.619 "get_zone_info": false, 00:04:13.619 "zone_management": false, 00:04:13.619 "zone_append": false, 00:04:13.619 "compare": false, 00:04:13.619 "compare_and_write": false, 00:04:13.619 "abort": true, 00:04:13.619 "seek_hole": false, 00:04:13.619 "seek_data": false, 00:04:13.619 "copy": true, 00:04:13.619 "nvme_iov_md": false 00:04:13.619 }, 00:04:13.619 "memory_domains": [ 00:04:13.619 { 00:04:13.619 "dma_device_id": "system", 00:04:13.619 "dma_device_type": 1 00:04:13.619 }, 00:04:13.619 { 00:04:13.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.619 "dma_device_type": 2 00:04:13.619 } 00:04:13.619 ], 00:04:13.619 "driver_specific": { 00:04:13.619 "passthru": { 00:04:13.619 "name": "Passthru0", 00:04:13.619 "base_bdev_name": "Malloc2" 00:04:13.619 } 00:04:13.619 } 00:04:13.619 } 00:04:13.619 ]' 00:04:13.619 10:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.619 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.620 00:04:13.620 real 0m0.301s 00:04:13.620 user 0m0.184s 00:04:13.620 sys 0m0.047s 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.620 10:44:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.620 ************************************ 00:04:13.620 END TEST rpc_daemon_integrity 00:04:13.620 ************************************ 00:04:13.620 10:44:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:13.620 10:44:33 rpc -- rpc/rpc.sh@84 -- # killprocess 150072 00:04:13.620 10:44:33 rpc -- common/autotest_common.sh@952 -- # '[' -z 150072 ']' 00:04:13.620 10:44:33 rpc -- common/autotest_common.sh@956 -- # kill -0 150072 00:04:13.620 10:44:33 rpc -- common/autotest_common.sh@957 -- # uname 00:04:13.620 10:44:33 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 150072 
00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 150072' 00:04:13.881 killing process with pid 150072 00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@971 -- # kill 150072 00:04:13.881 10:44:33 rpc -- common/autotest_common.sh@976 -- # wait 150072 00:04:14.143 00:04:14.143 real 0m2.672s 00:04:14.143 user 0m3.390s 00:04:14.143 sys 0m0.831s 00:04:14.143 10:44:33 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.143 10:44:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.143 ************************************ 00:04:14.143 END TEST rpc 00:04:14.143 ************************************ 00:04:14.143 10:44:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.143 10:44:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.143 10:44:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.143 10:44:33 -- common/autotest_common.sh@10 -- # set +x 00:04:14.143 ************************************ 00:04:14.143 START TEST skip_rpc 00:04:14.143 ************************************ 00:04:14.143 10:44:33 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.143 * Looking for test storage... 00:04:14.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.143 10:44:33 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.143 10:44:33 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.143 10:44:33 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.404 10:44:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.404 --rc genhtml_branch_coverage=1 00:04:14.404 --rc genhtml_function_coverage=1 00:04:14.404 --rc genhtml_legend=1 00:04:14.404 --rc geninfo_all_blocks=1 00:04:14.404 --rc geninfo_unexecuted_blocks=1 00:04:14.404 00:04:14.404 ' 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.404 --rc genhtml_branch_coverage=1 00:04:14.404 --rc genhtml_function_coverage=1 00:04:14.404 --rc genhtml_legend=1 00:04:14.404 --rc geninfo_all_blocks=1 00:04:14.404 --rc geninfo_unexecuted_blocks=1 00:04:14.404 00:04:14.404 ' 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.404 --rc genhtml_branch_coverage=1 00:04:14.404 --rc genhtml_function_coverage=1 00:04:14.404 --rc genhtml_legend=1 00:04:14.404 --rc geninfo_all_blocks=1 00:04:14.404 --rc geninfo_unexecuted_blocks=1 00:04:14.404 00:04:14.404 ' 00:04:14.404 10:44:33 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.404 --rc genhtml_branch_coverage=1 00:04:14.404 --rc genhtml_function_coverage=1 00:04:14.404 --rc genhtml_legend=1 00:04:14.404 --rc geninfo_all_blocks=1 00:04:14.404 --rc geninfo_unexecuted_blocks=1 00:04:14.404 00:04:14.404 ' 00:04:14.405 10:44:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.405 10:44:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.405 10:44:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.405 10:44:33 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.405 10:44:33 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.405 10:44:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.405 ************************************ 00:04:14.405 START TEST skip_rpc 00:04:14.405 ************************************ 00:04:14.405 10:44:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:14.405 
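test_skip_rpc, which starts below, launches spdk_tgt with --no-rpc-server and then expects `rpc_cmd spdk_get_version` to fail, since nothing is listening on the RPC socket. What rpc_cmd does under the hood, as a plain-POSIX sketch — the socket path is SPDK's default /var/tmp/spdk.sock (visible in the waitforlisten messages above) and error handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"method\":\"spdk_get_version\",\"id\":1}";
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect"); /* the expected outcome under --no-rpc-server */
		return 1;
	}
	write(fd, req, strlen(req));
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}

With the RPC server enabled, the same request returns the version JSON; with --no-rpc-server the connect() fails, which is the behavior the test below asserts (es=1).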
10:44:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=150923 00:04:14.405 10:44:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.405 10:44:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.405 10:44:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.405 [2024-11-15 10:44:33.830201] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:14.405 [2024-11-15 10:44:33.830258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150923 ] 00:04:14.405 [2024-11-15 10:44:33.924049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.666 [2024-11-15 10:44:33.976375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 150923 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 150923 ']' 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 150923 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 150923 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 150923' 00:04:19.956 killing process with pid 150923 00:04:19.956 10:44:38 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 150923 00:04:19.956 10:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 150923 00:04:19.956 00:04:19.956 real 0m5.264s 00:04:19.956 user 0m5.020s 00:04:19.956 sys 0m0.292s 00:04:19.956 10:44:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.956 10:44:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.956 ************************************ 00:04:19.956 END TEST skip_rpc 00:04:19.956 ************************************ 00:04:19.956 10:44:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:19.956 10:44:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.956 10:44:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.956 10:44:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.956 ************************************ 00:04:19.956 START TEST skip_rpc_with_json 00:04:19.956 ************************************ 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=151963 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 151963 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 151963 ']' 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:19.956 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.956 [2024-11-15 10:44:39.166766] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:04:19.956 [2024-11-15 10:44:39.166813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151963 ] 00:04:19.956 [2024-11-15 10:44:39.248553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.956 [2024-11-15 10:44:39.278211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.527 [2024-11-15 10:44:39.963743] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.527 request: 00:04:20.527 { 00:04:20.527 "trtype": "tcp", 00:04:20.527 "method": "nvmf_get_transports", 00:04:20.527 "req_id": 1 00:04:20.527 } 00:04:20.527 Got JSON-RPC error response 00:04:20.527 response: 00:04:20.527 { 00:04:20.527 "code": -19, 00:04:20.527 "message": "No such device" 00:04:20.527 } 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.527 [2024-11-15 10:44:39.975838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.527 10:44:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.788 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.788 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.788 { 00:04:20.788 "subsystems": [ 00:04:20.788 { 00:04:20.788 "subsystem": "fsdev", 00:04:20.788 "config": [ 00:04:20.788 { 00:04:20.788 "method": "fsdev_set_opts", 00:04:20.788 "params": { 00:04:20.788 "fsdev_io_pool_size": 65535, 00:04:20.788 "fsdev_io_cache_size": 256 00:04:20.788 } 00:04:20.788 } 00:04:20.788 ] 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "subsystem": "vfio_user_target", 00:04:20.788 "config": null 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "subsystem": "keyring", 00:04:20.788 "config": [] 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "subsystem": "iobuf", 00:04:20.788 "config": [ 00:04:20.788 { 00:04:20.788 "method": "iobuf_set_options", 00:04:20.788 "params": { 00:04:20.788 "small_pool_count": 8192, 00:04:20.788 "large_pool_count": 1024, 00:04:20.788 "small_bufsize": 8192, 00:04:20.788 "large_bufsize": 135168, 00:04:20.788 "enable_numa": false 00:04:20.788 } 00:04:20.788 } 00:04:20.788 
] 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "subsystem": "sock", 00:04:20.788 "config": [ 00:04:20.788 { 00:04:20.788 "method": "sock_set_default_impl", 00:04:20.788 "params": { 00:04:20.788 "impl_name": "posix" 00:04:20.788 } 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "method": "sock_impl_set_options", 00:04:20.788 "params": { 00:04:20.788 "impl_name": "ssl", 00:04:20.788 "recv_buf_size": 4096, 00:04:20.788 "send_buf_size": 4096, 00:04:20.788 "enable_recv_pipe": true, 00:04:20.788 "enable_quickack": false, 00:04:20.788 "enable_placement_id": 0, 00:04:20.788 "enable_zerocopy_send_server": true, 00:04:20.788 "enable_zerocopy_send_client": false, 00:04:20.788 "zerocopy_threshold": 0, 00:04:20.788 "tls_version": 0, 00:04:20.788 "enable_ktls": false 00:04:20.788 } 00:04:20.788 }, 00:04:20.788 { 00:04:20.788 "method": "sock_impl_set_options", 00:04:20.788 "params": { 00:04:20.788 "impl_name": "posix", 00:04:20.788 "recv_buf_size": 2097152, 00:04:20.788 "send_buf_size": 2097152, 00:04:20.788 "enable_recv_pipe": true, 00:04:20.788 "enable_quickack": false, 00:04:20.788 "enable_placement_id": 0, 00:04:20.788 "enable_zerocopy_send_server": true, 00:04:20.788 "enable_zerocopy_send_client": false, 00:04:20.788 "zerocopy_threshold": 0, 00:04:20.788 "tls_version": 0, 00:04:20.788 "enable_ktls": false 00:04:20.789 } 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "vmd", 00:04:20.789 "config": [] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "accel", 00:04:20.789 "config": [ 00:04:20.789 { 00:04:20.789 "method": "accel_set_options", 00:04:20.789 "params": { 00:04:20.789 "small_cache_size": 128, 00:04:20.789 "large_cache_size": 16, 00:04:20.789 "task_count": 2048, 00:04:20.789 "sequence_count": 2048, 00:04:20.789 "buf_count": 2048 00:04:20.789 } 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "bdev", 00:04:20.789 "config": [ 00:04:20.789 { 00:04:20.789 "method": "bdev_set_options", 00:04:20.789 "params": { 00:04:20.789 "bdev_io_pool_size": 65535, 00:04:20.789 "bdev_io_cache_size": 256, 00:04:20.789 "bdev_auto_examine": true, 00:04:20.789 "iobuf_small_cache_size": 128, 00:04:20.789 "iobuf_large_cache_size": 16 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "bdev_raid_set_options", 00:04:20.789 "params": { 00:04:20.789 "process_window_size_kb": 1024, 00:04:20.789 "process_max_bandwidth_mb_sec": 0 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "bdev_iscsi_set_options", 00:04:20.789 "params": { 00:04:20.789 "timeout_sec": 30 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "bdev_nvme_set_options", 00:04:20.789 "params": { 00:04:20.789 "action_on_timeout": "none", 00:04:20.789 "timeout_us": 0, 00:04:20.789 "timeout_admin_us": 0, 00:04:20.789 "keep_alive_timeout_ms": 10000, 00:04:20.789 "arbitration_burst": 0, 00:04:20.789 "low_priority_weight": 0, 00:04:20.789 "medium_priority_weight": 0, 00:04:20.789 "high_priority_weight": 0, 00:04:20.789 "nvme_adminq_poll_period_us": 10000, 00:04:20.789 "nvme_ioq_poll_period_us": 0, 00:04:20.789 "io_queue_requests": 0, 00:04:20.789 "delay_cmd_submit": true, 00:04:20.789 "transport_retry_count": 4, 00:04:20.789 "bdev_retry_count": 3, 00:04:20.789 "transport_ack_timeout": 0, 00:04:20.789 "ctrlr_loss_timeout_sec": 0, 00:04:20.789 "reconnect_delay_sec": 0, 00:04:20.789 "fast_io_fail_timeout_sec": 0, 00:04:20.789 "disable_auto_failback": false, 00:04:20.789 "generate_uuids": false, 00:04:20.789 "transport_tos": 0, 
00:04:20.789 "nvme_error_stat": false, 00:04:20.789 "rdma_srq_size": 0, 00:04:20.789 "io_path_stat": false, 00:04:20.789 "allow_accel_sequence": false, 00:04:20.789 "rdma_max_cq_size": 0, 00:04:20.789 "rdma_cm_event_timeout_ms": 0, 00:04:20.789 "dhchap_digests": [ 00:04:20.789 "sha256", 00:04:20.789 "sha384", 00:04:20.789 "sha512" 00:04:20.789 ], 00:04:20.789 "dhchap_dhgroups": [ 00:04:20.789 "null", 00:04:20.789 "ffdhe2048", 00:04:20.789 "ffdhe3072", 00:04:20.789 "ffdhe4096", 00:04:20.789 "ffdhe6144", 00:04:20.789 "ffdhe8192" 00:04:20.789 ] 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "bdev_nvme_set_hotplug", 00:04:20.789 "params": { 00:04:20.789 "period_us": 100000, 00:04:20.789 "enable": false 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "bdev_wait_for_examine" 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "scsi", 00:04:20.789 "config": null 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "scheduler", 00:04:20.789 "config": [ 00:04:20.789 { 00:04:20.789 "method": "framework_set_scheduler", 00:04:20.789 "params": { 00:04:20.789 "name": "static" 00:04:20.789 } 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "vhost_scsi", 00:04:20.789 "config": [] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "vhost_blk", 00:04:20.789 "config": [] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "ublk", 00:04:20.789 "config": [] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "nbd", 00:04:20.789 "config": [] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "nvmf", 00:04:20.789 "config": [ 00:04:20.789 { 00:04:20.789 "method": "nvmf_set_config", 00:04:20.789 "params": { 00:04:20.789 "discovery_filter": "match_any", 00:04:20.789 "admin_cmd_passthru": { 00:04:20.789 "identify_ctrlr": false 00:04:20.789 }, 00:04:20.789 "dhchap_digests": [ 00:04:20.789 "sha256", 00:04:20.789 "sha384", 00:04:20.789 "sha512" 00:04:20.789 ], 00:04:20.789 "dhchap_dhgroups": [ 00:04:20.789 "null", 00:04:20.789 "ffdhe2048", 00:04:20.789 "ffdhe3072", 00:04:20.789 "ffdhe4096", 00:04:20.789 "ffdhe6144", 00:04:20.789 "ffdhe8192" 00:04:20.789 ] 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "nvmf_set_max_subsystems", 00:04:20.789 "params": { 00:04:20.789 "max_subsystems": 1024 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "nvmf_set_crdt", 00:04:20.789 "params": { 00:04:20.789 "crdt1": 0, 00:04:20.789 "crdt2": 0, 00:04:20.789 "crdt3": 0 00:04:20.789 } 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "method": "nvmf_create_transport", 00:04:20.789 "params": { 00:04:20.789 "trtype": "TCP", 00:04:20.789 "max_queue_depth": 128, 00:04:20.789 "max_io_qpairs_per_ctrlr": 127, 00:04:20.789 "in_capsule_data_size": 4096, 00:04:20.789 "max_io_size": 131072, 00:04:20.789 "io_unit_size": 131072, 00:04:20.789 "max_aq_depth": 128, 00:04:20.789 "num_shared_buffers": 511, 00:04:20.789 "buf_cache_size": 4294967295, 00:04:20.789 "dif_insert_or_strip": false, 00:04:20.789 "zcopy": false, 00:04:20.789 "c2h_success": true, 00:04:20.789 "sock_priority": 0, 00:04:20.789 "abort_timeout_sec": 1, 00:04:20.789 "ack_timeout": 0, 00:04:20.789 "data_wr_pool_size": 0 00:04:20.789 } 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 }, 00:04:20.789 { 00:04:20.789 "subsystem": "iscsi", 00:04:20.789 "config": [ 00:04:20.789 { 00:04:20.789 "method": "iscsi_set_options", 00:04:20.789 "params": { 00:04:20.789 "node_base": "iqn.2016-06.io.spdk", 00:04:20.789 "max_sessions": 
128, 00:04:20.789 "max_connections_per_session": 2, 00:04:20.789 "max_queue_depth": 64, 00:04:20.789 "default_time2wait": 2, 00:04:20.789 "default_time2retain": 20, 00:04:20.789 "first_burst_length": 8192, 00:04:20.789 "immediate_data": true, 00:04:20.789 "allow_duplicated_isid": false, 00:04:20.789 "error_recovery_level": 0, 00:04:20.789 "nop_timeout": 60, 00:04:20.789 "nop_in_interval": 30, 00:04:20.789 "disable_chap": false, 00:04:20.789 "require_chap": false, 00:04:20.789 "mutual_chap": false, 00:04:20.789 "chap_group": 0, 00:04:20.789 "max_large_datain_per_connection": 64, 00:04:20.789 "max_r2t_per_connection": 4, 00:04:20.789 "pdu_pool_size": 36864, 00:04:20.789 "immediate_data_pool_size": 16384, 00:04:20.789 "data_out_pool_size": 2048 00:04:20.789 } 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 } 00:04:20.789 ] 00:04:20.789 } 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 151963 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 151963 ']' 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 151963 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 151963 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 151963' 00:04:20.789 killing process with pid 151963 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 151963 00:04:20.789 10:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 151963 00:04:21.050 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=152303 00:04:21.050 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.050 10:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 152303 ']' 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 152303' 00:04:26.335 killing process with pid 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 152303 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.335 00:04:26.335 real 0m6.559s 00:04:26.335 user 0m6.497s 00:04:26.335 sys 0m0.542s 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.335 10:44:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.335 ************************************ 00:04:26.335 END TEST skip_rpc_with_json 00:04:26.335 ************************************ 00:04:26.335 10:44:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.335 10:44:45 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.335 10:44:45 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.335 10:44:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.335 ************************************ 00:04:26.335 START TEST skip_rpc_with_delay 00:04:26.335 ************************************ 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.336 [2024-11-15 
10:44:45.806408] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:26.336 00:04:26.336 real 0m0.075s 00:04:26.336 user 0m0.050s 00:04:26.336 sys 0m0.024s 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.336 10:44:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.336 ************************************ 00:04:26.336 END TEST skip_rpc_with_delay 00:04:26.336 ************************************ 00:04:26.336 10:44:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.336 10:44:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.336 10:44:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.336 10:44:45 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.336 10:44:45 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.336 10:44:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.596 ************************************ 00:04:26.596 START TEST exit_on_failed_rpc_init 00:04:26.596 ************************************ 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=153368 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 153368 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 153368 ']' 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.596 10:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.596 [2024-11-15 10:44:45.958482] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:04:26.596 [2024-11-15 10:44:45.958535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153368 ] 00:04:26.596 [2024-11-15 10:44:46.043650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.596 [2024-11-15 10:44:46.074797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.539 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.540 [2024-11-15 10:44:46.818090] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:27.540 [2024-11-15 10:44:46.818142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153553 ] 00:04:27.540 [2024-11-15 10:44:46.907055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.540 [2024-11-15 10:44:46.943291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.540 [2024-11-15 10:44:46.943343] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:27.540 [2024-11-15 10:44:46.943353] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.540 [2024-11-15 10:44:46.943360] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 153368 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 153368 ']' 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 153368 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.540 10:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 153368 00:04:27.540 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.540 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.540 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 153368' 00:04:27.540 killing process with pid 153368 00:04:27.540 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 153368 00:04:27.540 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 153368 00:04:27.801 00:04:27.801 real 0m1.334s 00:04:27.801 user 0m1.571s 00:04:27.801 sys 0m0.382s 00:04:27.801 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.801 10:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.801 ************************************ 00:04:27.801 END TEST exit_on_failed_rpc_init 00:04:27.801 ************************************ 00:04:27.801 10:44:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.801 00:04:27.801 real 0m13.749s 00:04:27.801 user 0m13.371s 00:04:27.801 sys 0m1.552s 00:04:27.801 10:44:47 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.801 10:44:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.801 ************************************ 00:04:27.801 END TEST skip_rpc 00:04:27.801 ************************************ 00:04:27.801 10:44:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:27.801 10:44:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.801 10:44:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.801 10:44:47 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 START TEST rpc_client 00:04:28.062 ************************************ 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.062 * Looking for test storage... 00:04:28.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.062 10:44:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.062 --rc genhtml_branch_coverage=1 00:04:28.062 --rc genhtml_function_coverage=1 00:04:28.062 --rc genhtml_legend=1 00:04:28.062 --rc geninfo_all_blocks=1 00:04:28.062 --rc geninfo_unexecuted_blocks=1 00:04:28.062 00:04:28.062 ' 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.062 --rc genhtml_branch_coverage=1 00:04:28.062 --rc genhtml_function_coverage=1 00:04:28.062 --rc genhtml_legend=1 00:04:28.062 --rc geninfo_all_blocks=1 00:04:28.062 --rc geninfo_unexecuted_blocks=1 00:04:28.062 00:04:28.062 ' 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.062 --rc genhtml_branch_coverage=1 00:04:28.062 --rc genhtml_function_coverage=1 00:04:28.062 --rc genhtml_legend=1 00:04:28.062 --rc geninfo_all_blocks=1 00:04:28.062 --rc geninfo_unexecuted_blocks=1 00:04:28.062 00:04:28.062 ' 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.062 --rc genhtml_branch_coverage=1 00:04:28.062 --rc genhtml_function_coverage=1 00:04:28.062 --rc genhtml_legend=1 00:04:28.062 --rc geninfo_all_blocks=1 00:04:28.062 --rc geninfo_unexecuted_blocks=1 00:04:28.062 00:04:28.062 ' 00:04:28.062 10:44:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.062 OK 00:04:28.062 10:44:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.062 00:04:28.062 real 0m0.226s 00:04:28.062 user 0m0.128s 00:04:28.062 sys 0m0.111s 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.062 10:44:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 END TEST rpc_client 00:04:28.062 ************************************ 00:04:28.324 10:44:47 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
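[Editor's note] The "START TEST ... / END TEST ..." asterisk banners and the real/user/sys timing summaries above come from the run_test helper in common/autotest_common.sh. A minimal sketch of that pattern, with the argument checks and xtrace handling the real helper performs omitted (this is a simplified reconstruction, not the actual helper body):

    # Hypothetical simplified run_test-style wrapper; the real helper in
    # common/autotest_common.sh also validates arguments and toggles xtrace.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # produces the real/user/sys lines seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }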
00:04:28.324 10:44:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.324 10:44:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.324 10:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.324 ************************************ 00:04:28.324 START TEST json_config 00:04:28.324 ************************************ 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.324 10:44:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.324 10:44:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.324 10:44:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.324 10:44:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.324 10:44:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.324 10:44:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:28.324 10:44:47 json_config -- scripts/common.sh@345 -- # : 1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.324 10:44:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.324 10:44:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@353 -- # local d=1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.324 10:44:47 json_config -- scripts/common.sh@355 -- # echo 1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.324 10:44:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@353 -- # local d=2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.324 10:44:47 json_config -- scripts/common.sh@355 -- # echo 2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.324 10:44:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.324 10:44:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.324 10:44:47 json_config -- scripts/common.sh@368 -- # return 0 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.324 --rc genhtml_branch_coverage=1 00:04:28.324 --rc genhtml_function_coverage=1 00:04:28.324 --rc genhtml_legend=1 00:04:28.324 --rc geninfo_all_blocks=1 00:04:28.324 --rc geninfo_unexecuted_blocks=1 00:04:28.324 00:04:28.324 ' 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.324 --rc genhtml_branch_coverage=1 00:04:28.324 --rc genhtml_function_coverage=1 00:04:28.324 --rc genhtml_legend=1 00:04:28.324 --rc geninfo_all_blocks=1 00:04:28.324 --rc geninfo_unexecuted_blocks=1 00:04:28.324 00:04:28.324 ' 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.324 --rc genhtml_branch_coverage=1 00:04:28.324 --rc genhtml_function_coverage=1 00:04:28.324 --rc genhtml_legend=1 00:04:28.324 --rc geninfo_all_blocks=1 00:04:28.324 --rc geninfo_unexecuted_blocks=1 00:04:28.324 00:04:28.324 ' 00:04:28.324 10:44:47 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.324 --rc genhtml_branch_coverage=1 00:04:28.324 --rc genhtml_function_coverage=1 00:04:28.324 --rc genhtml_legend=1 00:04:28.324 --rc geninfo_all_blocks=1 00:04:28.324 --rc geninfo_unexecuted_blocks=1 00:04:28.324 00:04:28.324 ' 00:04:28.324 10:44:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
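[Editor's note] The trace just above (repeated once per test suite) is scripts/common.sh deciding whether the installed lcov is older than 2.x before exporting the --rc lcov_branch_coverage/lcov_function_coverage options. A sketch of that field-wise version comparison, with names simplified and the per-field decimal validation from the trace folded into a default:

    # Sketch of the cmp_versions logic traced above; returns success when $1 < $2.
    lt() {
      local IFS=.- v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"       # e.g. "1.15" -> (1 15)
      read -ra ver2 <<< "$2"       # e.g. "2"    -> (2)
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                     # equal versions are not less-than
    }

With lcov 1.15 this returns 0 for "lt 1.15 2", which is why the log then shows LCOV_OPTS being populated with the --rc overrides.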
00:04:28.324 10:44:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.324 10:44:47 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.324 10:44:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.586 10:44:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.586 10:44:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.586 10:44:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.586 10:44:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.586 10:44:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.586 10:44:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.586 10:44:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.586 10:44:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
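[Editor's note] The nvmf/common.sh lines above establish the host identity used by later nvme connect calls: an NQN generated by nvme-cli, a host ID, and an argument array. A sketch under the assumption that the host ID is simply the UUID tail of the generated NQN (the derivation is inferred from the traced values, not confirmed by this log):

    # Sketch of the host-identity setup traced above (test/nvmf/common.sh);
    # the UUID is whatever nvme-cli generates on the build host.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: strip down to the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'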
00:04:28.586 10:44:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.586 10:44:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:28.586 INFO: JSON configuration test init 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.586 10:44:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.586 10:44:47 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:28.586 10:44:47 json_config -- json_config/common.sh@10 -- # shift 00:04:28.586 10:44:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.586 10:44:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.586 10:44:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.586 10:44:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.586 10:44:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.586 10:44:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=153840 00:04:28.586 10:44:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.586 Waiting for target to run... 00:04:28.586 10:44:47 json_config -- json_config/common.sh@25 -- # waitforlisten 153840 /var/tmp/spdk_tgt.sock 00:04:28.586 10:44:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@833 -- # '[' -z 153840 ']' 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.586 10:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.586 [2024-11-15 10:44:47.951734] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
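[Editor's note] The json_config harness state traced above maps the two app roles to their pids, RPC sockets, and launch parameters, then starts the target with --wait-for-rpc so the suite can load a config before subsystem init. A sketch of that bookkeeping, with the waitforlisten polling the real harness does after launch left out:

    # Sketch of the harness state traced above (test/json_config/json_config.sh).
    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

    # json_config_test_start_app target --wait-for-rpc boils down to roughly:
    build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
    app_pid[target]=$!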
00:04:28.586 [2024-11-15 10:44:47.951805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153840 ] 00:04:28.847 [2024-11-15 10:44:48.249885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.847 [2024-11-15 10:44:48.274103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:29.418 10:44:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.418 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.418 10:44:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.418 10:44:48 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:29.418 10:44:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.989 10:44:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.989 10:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:29.989 10:44:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.989 10:44:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:30.250 10:44:49 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@54 -- # sort 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:30.250 10:44:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.250 10:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:30.250 10:44:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.250 10:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.250 10:44:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.250 MallocForNvmf0 00:04:30.250 10:44:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.250 10:44:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.511 MallocForNvmf1 00:04:30.511 10:44:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.511 10:44:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.772 [2024-11-15 10:44:50.065799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.772 10:44:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.772 10:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.772 10:44:50 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.772 10:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.033 10:44:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.033 10:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.293 10:44:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.293 10:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.293 [2024-11-15 10:44:50.767943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.293 10:44:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.293 10:44:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.293 10:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.293 10:44:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.293 10:44:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.293 10:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.553 10:44:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.553 10:44:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.553 10:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.553 MallocBdevForConfigChangeCheck 00:04:31.553 10:44:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.553 10:44:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.553 10:44:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.553 10:44:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.553 10:44:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.125 10:44:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:32.125 INFO: shutting down applications... 
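The RPC trace above boils down to the following subsystem setup sequence (a minimal sketch reusing the exact commands, arguments, and socket path from the trace; rpc.py stands in for the full script path logged above):

  # NVMe-oF/TCP target setup as driven by the json_config test (sketch):
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512 B blocks
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0         # transport options as logged
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Saving this state with save_config is what produces the spdk_tgt_config.json that the test later diffs against.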
00:04:32.125 10:44:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.125 10:44:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.125 10:44:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.125 10:44:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.386 Calling clear_iscsi_subsystem 00:04:32.386 Calling clear_nvmf_subsystem 00:04:32.386 Calling clear_nbd_subsystem 00:04:32.386 Calling clear_ublk_subsystem 00:04:32.386 Calling clear_vhost_blk_subsystem 00:04:32.386 Calling clear_vhost_scsi_subsystem 00:04:32.386 Calling clear_bdev_subsystem 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.386 10:44:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:32.647 10:44:52 json_config -- json_config/json_config.sh@352 -- # break 00:04:32.647 10:44:52 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:32.647 10:44:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:32.647 10:44:52 json_config -- json_config/common.sh@31 -- # local app=target 00:04:32.647 10:44:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.647 10:44:52 json_config -- json_config/common.sh@35 -- # [[ -n 153840 ]] 00:04:32.647 10:44:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 153840 00:04:32.647 10:44:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.647 10:44:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.647 10:44:52 json_config -- json_config/common.sh@41 -- # kill -0 153840 00:04:32.647 10:44:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.219 10:44:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.219 10:44:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.219 10:44:52 json_config -- json_config/common.sh@41 -- # kill -0 153840 00:04:33.219 10:44:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.219 10:44:52 json_config -- json_config/common.sh@43 -- # break 00:04:33.219 10:44:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.219 10:44:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.219 SPDK target shutdown done 00:04:33.219 10:44:52 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:33.219 INFO: relaunching applications... 
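The shutdown that follows is the json_config/common.sh pattern of sending SIGINT and then polling for exit, roughly as below (a sketch reconstructed from the loop counters and commands in the trace; pid handling simplified):

  # Shutdown-and-wait pattern from json_config/common.sh as traced above (sketch):
  kill -SIGINT "$pid"                        # ask spdk_tgt to shut down cleanly
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break      # kill -0 only probes; failure means the process is gone
    sleep 0.5                                # matches the 0.5 s polling interval in the trace
  done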
00:04:33.219 10:44:52 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.219 10:44:52 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.219 10:44:52 json_config -- json_config/common.sh@10 -- # shift 00:04:33.219 10:44:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.219 10:44:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.219 10:44:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.219 10:44:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.219 10:44:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.219 10:44:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=154979 00:04:33.219 10:44:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.219 Waiting for target to run... 00:04:33.219 10:44:52 json_config -- json_config/common.sh@25 -- # waitforlisten 154979 /var/tmp/spdk_tgt.sock 00:04:33.219 10:44:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@833 -- # '[' -z 154979 ']' 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.219 10:44:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.480 [2024-11-15 10:44:52.749587] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:33.480 [2024-11-15 10:44:52.749665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154979 ] 00:04:33.741 [2024-11-15 10:44:53.052943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.741 [2024-11-15 10:44:53.083918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.314 [2024-11-15 10:44:53.584191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.314 [2024-11-15 10:44:53.616585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.314 10:44:53 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.314 10:44:53 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:34.314 10:44:53 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.314 00:04:34.314 10:44:53 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.314 10:44:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.314 INFO: Checking if target configuration is the same... 
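Relaunching uses the saved JSON config directly, per the spdk_tgt command line logged above; schematically (a sketch assuming the waitforlisten helper shown in the trace, which blocks until the RPC socket accepts connections):

  # Relaunch-from-saved-config pattern (sketch; paths and flags as logged):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
  app_pid=$!                                    # 154979 in this run
  waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock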
00:04:34.314 10:44:53 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.314 10:44:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.314 10:44:53 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.314 + '[' 2 -ne 2 ']' 00:04:34.314 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.314 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.314 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.314 +++ basename /dev/fd/62 00:04:34.314 ++ mktemp /tmp/62.XXX 00:04:34.314 + tmp_file_1=/tmp/62.Tc7 00:04:34.314 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.314 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.314 + tmp_file_2=/tmp/spdk_tgt_config.json.YOC 00:04:34.314 + ret=0 00:04:34.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.574 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.574 + diff -u /tmp/62.Tc7 /tmp/spdk_tgt_config.json.YOC 00:04:34.574 + echo 'INFO: JSON config files are the same' 00:04:34.574 INFO: JSON config files are the same 00:04:34.574 + rm /tmp/62.Tc7 /tmp/spdk_tgt_config.json.YOC 00:04:34.574 + exit 0 00:04:34.574 10:44:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:34.574 10:44:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.574 INFO: changing configuration and checking if this can be detected... 00:04:34.574 10:44:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.574 10:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.835 10:44:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.835 10:44:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:34.835 10:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.835 + '[' 2 -ne 2 ']' 00:04:34.835 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.835 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:34.835 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.835 +++ basename /dev/fd/62 00:04:34.835 ++ mktemp /tmp/62.XXX 00:04:34.835 + tmp_file_1=/tmp/62.sfD 00:04:34.835 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.835 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.835 + tmp_file_2=/tmp/spdk_tgt_config.json.7v0 00:04:34.835 + ret=0 00:04:34.835 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.095 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.095 + diff -u /tmp/62.sfD /tmp/spdk_tgt_config.json.7v0 00:04:35.095 + ret=1 00:04:35.095 + echo '=== Start of file: /tmp/62.sfD ===' 00:04:35.095 + cat /tmp/62.sfD 00:04:35.095 + echo '=== End of file: /tmp/62.sfD ===' 00:04:35.095 + echo '' 00:04:35.095 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7v0 ===' 00:04:35.095 + cat /tmp/spdk_tgt_config.json.7v0 00:04:35.095 + echo '=== End of file: /tmp/spdk_tgt_config.json.7v0 ===' 00:04:35.095 + echo '' 00:04:35.095 + rm /tmp/62.sfD /tmp/spdk_tgt_config.json.7v0 00:04:35.095 + exit 1 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:35.095 INFO: configuration change detected. 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.095 10:44:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.095 10:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@324 -- # [[ -n 154979 ]] 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.095 10:44:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.095 10:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.095 10:44:54 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.355 10:44:54 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.355 10:44:54 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.355 10:44:54 json_config -- json_config/json_config.sh@330 -- # killprocess 154979 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@952 -- # '[' -z 154979 ']' 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@956 -- # kill -0 154979 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@957 -- # uname 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.355 10:44:54 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 154979 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 154979' 00:04:35.355 killing process with pid 154979 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@971 -- # kill 154979 00:04:35.355 10:44:54 json_config -- common/autotest_common.sh@976 -- # wait 154979 00:04:35.616 10:44:54 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.616 10:44:54 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:35.616 10:44:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.616 10:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.616 10:44:55 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:35.616 10:44:55 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:35.616 INFO: Success 00:04:35.616 00:04:35.616 real 0m7.371s 00:04:35.616 user 0m8.929s 00:04:35.616 sys 0m1.952s 00:04:35.616 10:44:55 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.616 10:44:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.616 ************************************ 00:04:35.616 END TEST json_config 00:04:35.616 ************************************ 00:04:35.616 10:44:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.616 10:44:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.616 10:44:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.616 10:44:55 -- common/autotest_common.sh@10 -- # set +x 00:04:35.616 ************************************ 00:04:35.616 START TEST json_config_extra_key 00:04:35.616 ************************************ 00:04:35.616 10:44:55 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.879 10:44:55 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.879 10:44:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.879 --rc genhtml_branch_coverage=1 00:04:35.879 --rc genhtml_function_coverage=1 00:04:35.879 --rc genhtml_legend=1 00:04:35.879 --rc geninfo_all_blocks=1 00:04:35.879 --rc geninfo_unexecuted_blocks=1 00:04:35.879 00:04:35.879 ' 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.879 --rc genhtml_branch_coverage=1 00:04:35.879 --rc genhtml_function_coverage=1 00:04:35.879 --rc genhtml_legend=1 00:04:35.879 --rc geninfo_all_blocks=1 00:04:35.879 --rc geninfo_unexecuted_blocks=1 00:04:35.879 00:04:35.879 ' 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.879 --rc genhtml_branch_coverage=1 00:04:35.879 --rc genhtml_function_coverage=1 00:04:35.879 --rc genhtml_legend=1 00:04:35.879 --rc geninfo_all_blocks=1 00:04:35.879 --rc geninfo_unexecuted_blocks=1 00:04:35.879 00:04:35.879 ' 00:04:35.879 10:44:55 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.879 --rc genhtml_branch_coverage=1 00:04:35.879 --rc genhtml_function_coverage=1 00:04:35.879 --rc genhtml_legend=1 00:04:35.879 --rc geninfo_all_blocks=1 00:04:35.879 --rc geninfo_unexecuted_blocks=1 00:04:35.879 00:04:35.879 ' 00:04:35.879 10:44:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.879 10:44:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:35.880 10:44:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.880 10:44:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.880 10:44:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.880 10:44:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.880 10:44:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.880 10:44:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.880 10:44:55 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.880 10:44:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.880 10:44:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.880 10:44:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:35.880 INFO: launching applications... 
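The '[: : integer expression expected' message above is a genuine shell error, not test output: an unset or empty variable reaches a numeric test at nvmf/common.sh line 33, and test(1) cannot compare an empty string with -eq. The usual guard looks like this (a sketch; SOME_FLAG is an illustrative name, not necessarily the variable used in nvmf/common.sh):

  # Defaulting an empty/unset variable before a numeric test (sketch):
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # ":-0" ensures test(1) always sees an integer
    echo "flag enabled"
  fi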
00:04:35.880 10:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=155695 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.880 Waiting for target to run... 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 155695 /var/tmp/spdk_tgt.sock 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 155695 ']' 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.880 10:44:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.880 10:44:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.880 [2024-11-15 10:44:55.385772] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:35.880 [2024-11-15 10:44:55.385845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155695 ] 00:04:36.451 [2024-11-15 10:44:55.686949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.451 [2024-11-15 10:44:55.718269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.712 10:44:56 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.712 10:44:56 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:36.712 00:04:36.712 10:44:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:36.712 INFO: shutting down applications... 
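As the declare -A lines above show, json_config/common.sh tracks each app instance in bash associative arrays keyed by app name; a minimal sketch of that bookkeeping (array contents copied from the trace, with the pid assignment reflecting the app_pid updates visible elsewhere in this run):

  # Per-app bookkeeping pattern from json_config/common.sh (sketch):
  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  # after launch:    app_pid["$app"]=$!          (155695 in this run)
  # after shutdown:  app_pid["$app"]=            (cleared once kill -0 fails)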
00:04:36.712 10:44:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 155695 ]] 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 155695 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155695 00:04:36.712 10:44:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155695 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.285 10:44:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.285 SPDK target shutdown done 00:04:37.285 10:44:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.285 Success 00:04:37.285 00:04:37.285 real 0m1.565s 00:04:37.285 user 0m1.165s 00:04:37.285 sys 0m0.414s 00:04:37.285 10:44:56 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.285 10:44:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.285 ************************************ 00:04:37.285 END TEST json_config_extra_key 00:04:37.285 ************************************ 00:04:37.285 10:44:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.285 10:44:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.285 10:44:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.285 10:44:56 -- common/autotest_common.sh@10 -- # set +x 00:04:37.285 ************************************ 00:04:37.285 START TEST alias_rpc 00:04:37.285 ************************************ 00:04:37.285 10:44:56 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.547 * Looking for test storage... 
00:04:37.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:37.547 10:44:56 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.547 10:44:56 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.547 10:44:56 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.547 10:44:56 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.547 10:44:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.548 --rc genhtml_branch_coverage=1 00:04:37.548 --rc genhtml_function_coverage=1 00:04:37.548 --rc genhtml_legend=1 00:04:37.548 --rc geninfo_all_blocks=1 00:04:37.548 --rc geninfo_unexecuted_blocks=1 00:04:37.548 00:04:37.548 ' 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.548 --rc genhtml_branch_coverage=1 00:04:37.548 --rc genhtml_function_coverage=1 00:04:37.548 --rc genhtml_legend=1 00:04:37.548 --rc geninfo_all_blocks=1 00:04:37.548 --rc geninfo_unexecuted_blocks=1 00:04:37.548 00:04:37.548 ' 00:04:37.548 10:44:56 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.548 --rc genhtml_branch_coverage=1 00:04:37.548 --rc genhtml_function_coverage=1 00:04:37.548 --rc genhtml_legend=1 00:04:37.548 --rc geninfo_all_blocks=1 00:04:37.548 --rc geninfo_unexecuted_blocks=1 00:04:37.548 00:04:37.548 ' 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.548 --rc genhtml_branch_coverage=1 00:04:37.548 --rc genhtml_function_coverage=1 00:04:37.548 --rc genhtml_legend=1 00:04:37.548 --rc geninfo_all_blocks=1 00:04:37.548 --rc geninfo_unexecuted_blocks=1 00:04:37.548 00:04:37.548 ' 00:04:37.548 10:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.548 10:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=156049 00:04:37.548 10:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 156049 00:04:37.548 10:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 156049 ']' 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:37.548 10:44:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.548 [2024-11-15 10:44:57.018392] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:04:37.548 [2024-11-15 10:44:57.018464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156049 ] 00:04:37.809 [2024-11-15 10:44:57.103389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.809 [2024-11-15 10:44:57.138492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.382 10:44:57 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.382 10:44:57 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.382 10:44:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:38.643 10:44:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 156049 00:04:38.643 10:44:57 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 156049 ']' 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 156049 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 156049 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 156049' 00:04:38.643 killing process with pid 156049 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@971 -- # kill 156049 00:04:38.643 10:44:58 alias_rpc -- common/autotest_common.sh@976 -- # wait 156049 00:04:38.904 00:04:38.904 real 0m1.496s 00:04:38.904 user 0m1.638s 00:04:38.904 sys 0m0.425s 00:04:38.904 10:44:58 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.904 10:44:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.904 ************************************ 00:04:38.904 END TEST alias_rpc 00:04:38.904 ************************************ 00:04:38.904 10:44:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:38.904 10:44:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.904 10:44:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.905 10:44:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.905 10:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:38.905 ************************************ 00:04:38.905 START TEST spdkcli_tcp 00:04:38.905 ************************************ 00:04:38.905 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.905 * Looking for test storage... 
00:04:38.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:38.905 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.167 10:44:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.167 --rc genhtml_branch_coverage=1 00:04:39.167 --rc genhtml_function_coverage=1 00:04:39.167 --rc genhtml_legend=1 00:04:39.167 --rc geninfo_all_blocks=1 00:04:39.167 --rc geninfo_unexecuted_blocks=1 00:04:39.167 00:04:39.167 ' 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.167 --rc genhtml_branch_coverage=1 00:04:39.167 --rc genhtml_function_coverage=1 00:04:39.167 --rc genhtml_legend=1 00:04:39.167 --rc geninfo_all_blocks=1 00:04:39.167 --rc 
geninfo_unexecuted_blocks=1 00:04:39.167 00:04:39.167 ' 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.167 --rc genhtml_branch_coverage=1 00:04:39.167 --rc genhtml_function_coverage=1 00:04:39.167 --rc genhtml_legend=1 00:04:39.167 --rc geninfo_all_blocks=1 00:04:39.167 --rc geninfo_unexecuted_blocks=1 00:04:39.167 00:04:39.167 ' 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.167 --rc genhtml_branch_coverage=1 00:04:39.167 --rc genhtml_function_coverage=1 00:04:39.167 --rc genhtml_legend=1 00:04:39.167 --rc geninfo_all_blocks=1 00:04:39.167 --rc geninfo_unexecuted_blocks=1 00:04:39.167 00:04:39.167 ' 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.167 10:44:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=156393 00:04:39.167 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 156393 00:04:39.168 10:44:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 156393 ']' 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.168 10:44:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 [2024-11-15 10:44:58.601181] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:04:39.168 [2024-11-15 10:44:58.601253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156393 ] 00:04:39.168 [2024-11-15 10:44:58.690268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.428 [2024-11-15 10:44:58.726321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.428 [2024-11-15 10:44:58.726323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.998 10:44:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.998 10:44:59 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:39.999 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=156571 00:04:39.999 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.999 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.259 [ 00:04:40.259 "bdev_malloc_delete", 00:04:40.259 "bdev_malloc_create", 00:04:40.259 "bdev_null_resize", 00:04:40.259 "bdev_null_delete", 00:04:40.259 "bdev_null_create", 00:04:40.259 "bdev_nvme_cuse_unregister", 00:04:40.259 "bdev_nvme_cuse_register", 00:04:40.259 "bdev_opal_new_user", 00:04:40.259 "bdev_opal_set_lock_state", 00:04:40.259 "bdev_opal_delete", 00:04:40.259 "bdev_opal_get_info", 00:04:40.259 "bdev_opal_create", 00:04:40.259 "bdev_nvme_opal_revert", 00:04:40.259 "bdev_nvme_opal_init", 00:04:40.259 "bdev_nvme_send_cmd", 00:04:40.259 "bdev_nvme_set_keys", 00:04:40.259 "bdev_nvme_get_path_iostat", 00:04:40.259 "bdev_nvme_get_mdns_discovery_info", 00:04:40.259 "bdev_nvme_stop_mdns_discovery", 00:04:40.259 "bdev_nvme_start_mdns_discovery", 00:04:40.259 "bdev_nvme_set_multipath_policy", 00:04:40.259 "bdev_nvme_set_preferred_path", 00:04:40.259 "bdev_nvme_get_io_paths", 00:04:40.259 "bdev_nvme_remove_error_injection", 00:04:40.259 "bdev_nvme_add_error_injection", 00:04:40.259 "bdev_nvme_get_discovery_info", 00:04:40.259 "bdev_nvme_stop_discovery", 00:04:40.259 "bdev_nvme_start_discovery", 00:04:40.259 "bdev_nvme_get_controller_health_info", 00:04:40.259 "bdev_nvme_disable_controller", 00:04:40.259 "bdev_nvme_enable_controller", 00:04:40.259 "bdev_nvme_reset_controller", 00:04:40.259 "bdev_nvme_get_transport_statistics", 00:04:40.259 "bdev_nvme_apply_firmware", 00:04:40.259 "bdev_nvme_detach_controller", 00:04:40.259 "bdev_nvme_get_controllers", 00:04:40.259 "bdev_nvme_attach_controller", 00:04:40.259 "bdev_nvme_set_hotplug", 00:04:40.259 "bdev_nvme_set_options", 00:04:40.259 "bdev_passthru_delete", 00:04:40.259 "bdev_passthru_create", 00:04:40.259 "bdev_lvol_set_parent_bdev", 00:04:40.259 "bdev_lvol_set_parent", 00:04:40.259 "bdev_lvol_check_shallow_copy", 00:04:40.259 "bdev_lvol_start_shallow_copy", 00:04:40.259 "bdev_lvol_grow_lvstore", 00:04:40.259 "bdev_lvol_get_lvols", 00:04:40.259 "bdev_lvol_get_lvstores", 00:04:40.259 "bdev_lvol_delete", 00:04:40.259 "bdev_lvol_set_read_only", 00:04:40.259 "bdev_lvol_resize", 00:04:40.259 "bdev_lvol_decouple_parent", 00:04:40.259 "bdev_lvol_inflate", 00:04:40.259 "bdev_lvol_rename", 00:04:40.259 "bdev_lvol_clone_bdev", 00:04:40.259 "bdev_lvol_clone", 00:04:40.259 "bdev_lvol_snapshot", 00:04:40.259 "bdev_lvol_create", 00:04:40.259 "bdev_lvol_delete_lvstore", 00:04:40.259 "bdev_lvol_rename_lvstore", 
00:04:40.259 "bdev_lvol_create_lvstore", 00:04:40.259 "bdev_raid_set_options", 00:04:40.259 "bdev_raid_remove_base_bdev", 00:04:40.259 "bdev_raid_add_base_bdev", 00:04:40.259 "bdev_raid_delete", 00:04:40.259 "bdev_raid_create", 00:04:40.259 "bdev_raid_get_bdevs", 00:04:40.259 "bdev_error_inject_error", 00:04:40.259 "bdev_error_delete", 00:04:40.259 "bdev_error_create", 00:04:40.259 "bdev_split_delete", 00:04:40.259 "bdev_split_create", 00:04:40.259 "bdev_delay_delete", 00:04:40.259 "bdev_delay_create", 00:04:40.259 "bdev_delay_update_latency", 00:04:40.259 "bdev_zone_block_delete", 00:04:40.259 "bdev_zone_block_create", 00:04:40.259 "blobfs_create", 00:04:40.259 "blobfs_detect", 00:04:40.259 "blobfs_set_cache_size", 00:04:40.259 "bdev_aio_delete", 00:04:40.259 "bdev_aio_rescan", 00:04:40.259 "bdev_aio_create", 00:04:40.259 "bdev_ftl_set_property", 00:04:40.259 "bdev_ftl_get_properties", 00:04:40.259 "bdev_ftl_get_stats", 00:04:40.259 "bdev_ftl_unmap", 00:04:40.259 "bdev_ftl_unload", 00:04:40.259 "bdev_ftl_delete", 00:04:40.260 "bdev_ftl_load", 00:04:40.260 "bdev_ftl_create", 00:04:40.260 "bdev_virtio_attach_controller", 00:04:40.260 "bdev_virtio_scsi_get_devices", 00:04:40.260 "bdev_virtio_detach_controller", 00:04:40.260 "bdev_virtio_blk_set_hotplug", 00:04:40.260 "bdev_iscsi_delete", 00:04:40.260 "bdev_iscsi_create", 00:04:40.260 "bdev_iscsi_set_options", 00:04:40.260 "accel_error_inject_error", 00:04:40.260 "ioat_scan_accel_module", 00:04:40.260 "dsa_scan_accel_module", 00:04:40.260 "iaa_scan_accel_module", 00:04:40.260 "vfu_virtio_create_fs_endpoint", 00:04:40.260 "vfu_virtio_create_scsi_endpoint", 00:04:40.260 "vfu_virtio_scsi_remove_target", 00:04:40.260 "vfu_virtio_scsi_add_target", 00:04:40.260 "vfu_virtio_create_blk_endpoint", 00:04:40.260 "vfu_virtio_delete_endpoint", 00:04:40.260 "keyring_file_remove_key", 00:04:40.260 "keyring_file_add_key", 00:04:40.260 "keyring_linux_set_options", 00:04:40.260 "fsdev_aio_delete", 00:04:40.260 "fsdev_aio_create", 00:04:40.260 "iscsi_get_histogram", 00:04:40.260 "iscsi_enable_histogram", 00:04:40.260 "iscsi_set_options", 00:04:40.260 "iscsi_get_auth_groups", 00:04:40.260 "iscsi_auth_group_remove_secret", 00:04:40.260 "iscsi_auth_group_add_secret", 00:04:40.260 "iscsi_delete_auth_group", 00:04:40.260 "iscsi_create_auth_group", 00:04:40.260 "iscsi_set_discovery_auth", 00:04:40.260 "iscsi_get_options", 00:04:40.260 "iscsi_target_node_request_logout", 00:04:40.260 "iscsi_target_node_set_redirect", 00:04:40.260 "iscsi_target_node_set_auth", 00:04:40.260 "iscsi_target_node_add_lun", 00:04:40.260 "iscsi_get_stats", 00:04:40.260 "iscsi_get_connections", 00:04:40.260 "iscsi_portal_group_set_auth", 00:04:40.260 "iscsi_start_portal_group", 00:04:40.260 "iscsi_delete_portal_group", 00:04:40.260 "iscsi_create_portal_group", 00:04:40.260 "iscsi_get_portal_groups", 00:04:40.260 "iscsi_delete_target_node", 00:04:40.260 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.260 "iscsi_target_node_add_pg_ig_maps", 00:04:40.260 "iscsi_create_target_node", 00:04:40.260 "iscsi_get_target_nodes", 00:04:40.260 "iscsi_delete_initiator_group", 00:04:40.260 "iscsi_initiator_group_remove_initiators", 00:04:40.260 "iscsi_initiator_group_add_initiators", 00:04:40.260 "iscsi_create_initiator_group", 00:04:40.260 "iscsi_get_initiator_groups", 00:04:40.260 "nvmf_set_crdt", 00:04:40.260 "nvmf_set_config", 00:04:40.260 "nvmf_set_max_subsystems", 00:04:40.260 "nvmf_stop_mdns_prr", 00:04:40.260 "nvmf_publish_mdns_prr", 00:04:40.260 "nvmf_subsystem_get_listeners", 00:04:40.260 
"nvmf_subsystem_get_qpairs", 00:04:40.260 "nvmf_subsystem_get_controllers", 00:04:40.260 "nvmf_get_stats", 00:04:40.260 "nvmf_get_transports", 00:04:40.260 "nvmf_create_transport", 00:04:40.260 "nvmf_get_targets", 00:04:40.260 "nvmf_delete_target", 00:04:40.260 "nvmf_create_target", 00:04:40.260 "nvmf_subsystem_allow_any_host", 00:04:40.260 "nvmf_subsystem_set_keys", 00:04:40.260 "nvmf_subsystem_remove_host", 00:04:40.260 "nvmf_subsystem_add_host", 00:04:40.260 "nvmf_ns_remove_host", 00:04:40.260 "nvmf_ns_add_host", 00:04:40.260 "nvmf_subsystem_remove_ns", 00:04:40.260 "nvmf_subsystem_set_ns_ana_group", 00:04:40.260 "nvmf_subsystem_add_ns", 00:04:40.260 "nvmf_subsystem_listener_set_ana_state", 00:04:40.260 "nvmf_discovery_get_referrals", 00:04:40.260 "nvmf_discovery_remove_referral", 00:04:40.260 "nvmf_discovery_add_referral", 00:04:40.260 "nvmf_subsystem_remove_listener", 00:04:40.260 "nvmf_subsystem_add_listener", 00:04:40.260 "nvmf_delete_subsystem", 00:04:40.260 "nvmf_create_subsystem", 00:04:40.260 "nvmf_get_subsystems", 00:04:40.260 "env_dpdk_get_mem_stats", 00:04:40.260 "nbd_get_disks", 00:04:40.260 "nbd_stop_disk", 00:04:40.260 "nbd_start_disk", 00:04:40.260 "ublk_recover_disk", 00:04:40.260 "ublk_get_disks", 00:04:40.260 "ublk_stop_disk", 00:04:40.260 "ublk_start_disk", 00:04:40.260 "ublk_destroy_target", 00:04:40.260 "ublk_create_target", 00:04:40.260 "virtio_blk_create_transport", 00:04:40.260 "virtio_blk_get_transports", 00:04:40.260 "vhost_controller_set_coalescing", 00:04:40.260 "vhost_get_controllers", 00:04:40.260 "vhost_delete_controller", 00:04:40.260 "vhost_create_blk_controller", 00:04:40.260 "vhost_scsi_controller_remove_target", 00:04:40.260 "vhost_scsi_controller_add_target", 00:04:40.260 "vhost_start_scsi_controller", 00:04:40.260 "vhost_create_scsi_controller", 00:04:40.260 "thread_set_cpumask", 00:04:40.260 "scheduler_set_options", 00:04:40.260 "framework_get_governor", 00:04:40.260 "framework_get_scheduler", 00:04:40.260 "framework_set_scheduler", 00:04:40.260 "framework_get_reactors", 00:04:40.260 "thread_get_io_channels", 00:04:40.260 "thread_get_pollers", 00:04:40.260 "thread_get_stats", 00:04:40.260 "framework_monitor_context_switch", 00:04:40.260 "spdk_kill_instance", 00:04:40.260 "log_enable_timestamps", 00:04:40.260 "log_get_flags", 00:04:40.260 "log_clear_flag", 00:04:40.260 "log_set_flag", 00:04:40.260 "log_get_level", 00:04:40.260 "log_set_level", 00:04:40.260 "log_get_print_level", 00:04:40.260 "log_set_print_level", 00:04:40.260 "framework_enable_cpumask_locks", 00:04:40.260 "framework_disable_cpumask_locks", 00:04:40.260 "framework_wait_init", 00:04:40.260 "framework_start_init", 00:04:40.260 "scsi_get_devices", 00:04:40.260 "bdev_get_histogram", 00:04:40.260 "bdev_enable_histogram", 00:04:40.260 "bdev_set_qos_limit", 00:04:40.260 "bdev_set_qd_sampling_period", 00:04:40.260 "bdev_get_bdevs", 00:04:40.260 "bdev_reset_iostat", 00:04:40.260 "bdev_get_iostat", 00:04:40.260 "bdev_examine", 00:04:40.260 "bdev_wait_for_examine", 00:04:40.260 "bdev_set_options", 00:04:40.260 "accel_get_stats", 00:04:40.260 "accel_set_options", 00:04:40.260 "accel_set_driver", 00:04:40.260 "accel_crypto_key_destroy", 00:04:40.260 "accel_crypto_keys_get", 00:04:40.260 "accel_crypto_key_create", 00:04:40.260 "accel_assign_opc", 00:04:40.260 "accel_get_module_info", 00:04:40.260 "accel_get_opc_assignments", 00:04:40.260 "vmd_rescan", 00:04:40.260 "vmd_remove_device", 00:04:40.260 "vmd_enable", 00:04:40.260 "sock_get_default_impl", 00:04:40.260 "sock_set_default_impl", 
00:04:40.260 "sock_impl_set_options", 00:04:40.260 "sock_impl_get_options", 00:04:40.260 "iobuf_get_stats", 00:04:40.260 "iobuf_set_options", 00:04:40.260 "keyring_get_keys", 00:04:40.260 "vfu_tgt_set_base_path", 00:04:40.260 "framework_get_pci_devices", 00:04:40.260 "framework_get_config", 00:04:40.260 "framework_get_subsystems", 00:04:40.260 "fsdev_set_opts", 00:04:40.260 "fsdev_get_opts", 00:04:40.260 "trace_get_info", 00:04:40.260 "trace_get_tpoint_group_mask", 00:04:40.260 "trace_disable_tpoint_group", 00:04:40.260 "trace_enable_tpoint_group", 00:04:40.260 "trace_clear_tpoint_mask", 00:04:40.260 "trace_set_tpoint_mask", 00:04:40.260 "notify_get_notifications", 00:04:40.260 "notify_get_types", 00:04:40.260 "spdk_get_version", 00:04:40.260 "rpc_get_methods" 00:04:40.260 ] 00:04:40.260 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.260 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.260 10:44:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 156393 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 156393 ']' 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 156393 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 156393 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 156393' 00:04:40.260 killing process with pid 156393 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 156393 00:04:40.260 10:44:59 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 156393 00:04:40.522 00:04:40.522 real 0m1.505s 00:04:40.522 user 0m2.717s 00:04:40.522 sys 0m0.447s 00:04:40.522 10:44:59 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.522 10:44:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.522 ************************************ 00:04:40.522 END TEST spdkcli_tcp 00:04:40.522 ************************************ 00:04:40.522 10:44:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.522 10:44:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.522 10:44:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.522 10:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:40.522 ************************************ 00:04:40.522 START TEST dpdk_mem_utility 00:04:40.522 ************************************ 00:04:40.522 10:44:59 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.522 * Looking for test storage... 
00:04:40.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:40.522 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.522 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.522 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.782 10:45:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc 
genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.782 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=156743 00:04:40.782 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 156743 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 156743 ']' 00:04:40.782 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.782 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.782 [2024-11-15 10:45:00.186554] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
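[Note] The RPC-driven flow this dpdk_mem_utility test exercises can be replayed by hand against a running spdk_tgt. A minimal sketch, assuming a target built from this workspace and listening on the default /var/tmp/spdk.sock; the socat TCP bridge is the same trick the spdkcli_tcp test above used, and backgrounding it here is an assumption, not part of the trace:

  # optional TCP bridge to the Unix-domain RPC socket, as in the spdkcli_tcp test:
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  # the memory-utility sequence driven below, over the default Unix socket:
  scripts/rpc.py env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                # summarizes heaps, mempools and memzones from that dump
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as dumped below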
00:04:40.782 [2024-11-15 10:45:00.186651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156743 ] 00:04:40.782 [2024-11-15 10:45:00.275894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.782 [2024-11-15 10:45:00.311127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.776 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.776 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:41.776 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.776 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.776 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.776 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.776 { 00:04:41.776 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.776 } 00:04:41.776 10:45:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.776 10:45:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.776 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:41.776 1 heaps totaling size 818.000000 MiB 00:04:41.776 size: 818.000000 MiB heap id: 0 00:04:41.776 end heaps---------- 00:04:41.776 9 mempools totaling size 603.782043 MiB 00:04:41.776 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.776 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.776 size: 100.555481 MiB name: bdev_io_156743 00:04:41.776 size: 50.003479 MiB name: msgpool_156743 00:04:41.776 size: 36.509338 MiB name: fsdev_io_156743 00:04:41.776 size: 21.763794 MiB name: PDU_Pool 00:04:41.776 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.776 size: 4.133484 MiB name: evtpool_156743 00:04:41.776 size: 0.026123 MiB name: Session_Pool 00:04:41.776 end mempools------- 00:04:41.776 6 memzones totaling size 4.142822 MiB 00:04:41.776 size: 1.000366 MiB name: RG_ring_0_156743 00:04:41.776 size: 1.000366 MiB name: RG_ring_1_156743 00:04:41.776 size: 1.000366 MiB name: RG_ring_4_156743 00:04:41.776 size: 1.000366 MiB name: RG_ring_5_156743 00:04:41.776 size: 0.125366 MiB name: RG_ring_2_156743 00:04:41.776 size: 0.015991 MiB name: RG_ring_3_156743 00:04:41.776 end memzones------- 00:04:41.776 10:45:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.776 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:41.776 list of free elements. 
size: 10.852478 MiB 00:04:41.776 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:41.776 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:41.776 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:41.776 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:41.776 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:41.776 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:41.776 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:41.776 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:41.776 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:41.777 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:41.777 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:41.777 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:41.777 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:41.777 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:41.777 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:41.777 list of standard malloc elements. size: 199.218628 MiB 00:04:41.777 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:41.777 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:41.777 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:41.777 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:41.777 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:41.777 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:41.777 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:41.777 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:41.777 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:41.777 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:41.777 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:41.777 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:41.777 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:41.777 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:41.777 list of memzone associated elements. size: 607.928894 MiB 00:04:41.777 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:41.777 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.777 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:41.777 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.777 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:41.777 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_156743_0 00:04:41.777 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:41.777 associated memzone info: size: 48.002930 MiB name: MP_msgpool_156743_0 00:04:41.777 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:41.777 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_156743_0 00:04:41.777 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:41.777 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.777 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:41.777 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.777 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:41.777 associated memzone info: size: 3.000122 MiB name: MP_evtpool_156743_0 00:04:41.777 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:41.777 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_156743 00:04:41.777 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:41.777 associated memzone info: size: 1.007996 MiB name: MP_evtpool_156743 00:04:41.777 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:41.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.777 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:41.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.777 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:41.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.777 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:41.777 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.777 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:41.777 associated memzone info: size: 1.000366 MiB name: RG_ring_0_156743 00:04:41.777 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:41.777 associated memzone info: size: 1.000366 MiB name: RG_ring_1_156743 00:04:41.777 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:41.777 associated memzone info: size: 1.000366 MiB name: RG_ring_4_156743 00:04:41.777 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:41.777 associated memzone info: size: 1.000366 MiB name: RG_ring_5_156743 00:04:41.777 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:41.777 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_156743 00:04:41.777 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:41.777 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_156743 00:04:41.777 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:41.777 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.777 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:41.777 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.777 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:41.777 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.777 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:41.777 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_156743 00:04:41.777 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:41.777 associated memzone info: size: 0.125366 MiB name: RG_ring_2_156743 00:04:41.777 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:41.777 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.777 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:41.777 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.777 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:41.777 associated memzone info: size: 0.015991 MiB name: RG_ring_3_156743 00:04:41.777 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:41.777 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.777 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:41.777 associated memzone info: size: 0.000183 MiB name: MP_msgpool_156743 00:04:41.777 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:41.777 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_156743 00:04:41.777 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:41.777 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_156743 00:04:41.777 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:41.777 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.777 10:45:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.777 10:45:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 156743 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 156743 ']' 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 156743 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 156743 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 156743' 00:04:41.777 killing process with pid 156743 00:04:41.777 10:45:01 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 156743 00:04:41.777 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 156743 00:04:42.066 00:04:42.066 real 0m1.426s 00:04:42.066 user 0m1.502s 00:04:42.066 sys 0m0.436s 00:04:42.066 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.066 10:45:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.066 ************************************ 00:04:42.066 END TEST dpdk_mem_utility 00:04:42.066 ************************************ 00:04:42.066 10:45:01 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.066 10:45:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.066 10:45:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.066 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.066 ************************************ 00:04:42.066 START TEST event 00:04:42.066 ************************************ 00:04:42.066 10:45:01 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.066 * Looking for test storage... 00:04:42.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.066 10:45:01 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.066 10:45:01 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.066 10:45:01 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.343 10:45:01 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.343 10:45:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.343 10:45:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.343 10:45:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.343 10:45:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.343 10:45:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.343 10:45:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.343 10:45:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.343 10:45:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.343 10:45:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.343 10:45:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.343 10:45:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.343 10:45:01 event -- scripts/common.sh@344 -- # case "$op" in 00:04:42.343 10:45:01 event -- scripts/common.sh@345 -- # : 1 00:04:42.343 10:45:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.344 10:45:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.344 10:45:01 event -- scripts/common.sh@365 -- # decimal 1 00:04:42.344 10:45:01 event -- scripts/common.sh@353 -- # local d=1 00:04:42.344 10:45:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.344 10:45:01 event -- scripts/common.sh@355 -- # echo 1 00:04:42.344 10:45:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.344 10:45:01 event -- scripts/common.sh@366 -- # decimal 2 00:04:42.344 10:45:01 event -- scripts/common.sh@353 -- # local d=2 00:04:42.344 10:45:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.344 10:45:01 event -- scripts/common.sh@355 -- # echo 2 00:04:42.344 10:45:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.344 10:45:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.344 10:45:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.344 10:45:01 event -- scripts/common.sh@368 -- # return 0 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.344 --rc genhtml_branch_coverage=1 00:04:42.344 --rc genhtml_function_coverage=1 00:04:42.344 --rc genhtml_legend=1 00:04:42.344 --rc geninfo_all_blocks=1 00:04:42.344 --rc geninfo_unexecuted_blocks=1 00:04:42.344 00:04:42.344 ' 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.344 --rc genhtml_branch_coverage=1 00:04:42.344 --rc genhtml_function_coverage=1 00:04:42.344 --rc genhtml_legend=1 00:04:42.344 --rc geninfo_all_blocks=1 00:04:42.344 --rc geninfo_unexecuted_blocks=1 00:04:42.344 00:04:42.344 ' 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.344 --rc genhtml_branch_coverage=1 00:04:42.344 --rc genhtml_function_coverage=1 00:04:42.344 --rc genhtml_legend=1 00:04:42.344 --rc geninfo_all_blocks=1 00:04:42.344 --rc geninfo_unexecuted_blocks=1 00:04:42.344 00:04:42.344 ' 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.344 --rc genhtml_branch_coverage=1 00:04:42.344 --rc genhtml_function_coverage=1 00:04:42.344 --rc genhtml_legend=1 00:04:42.344 --rc geninfo_all_blocks=1 00:04:42.344 --rc geninfo_unexecuted_blocks=1 00:04:42.344 00:04:42.344 ' 00:04:42.344 10:45:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:42.344 10:45:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.344 10:45:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:42.344 10:45:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.344 10:45:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.344 ************************************ 00:04:42.344 START TEST event_perf 00:04:42.344 ************************************ 00:04:42.344 10:45:01 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:42.344 Running I/O for 1 seconds...[2024-11-15 10:45:01.683267] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:42.344 [2024-11-15 10:45:01.683383] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157168 ] 00:04:42.344 [2024-11-15 10:45:01.774534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.344 [2024-11-15 10:45:01.819834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.344 [2024-11-15 10:45:01.819988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.344 [2024-11-15 10:45:01.820144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.344 Running I/O for 1 seconds...[2024-11-15 10:45:01.820145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.316 00:04:43.316 lcore 0: 178984 00:04:43.316 lcore 1: 178987 00:04:43.316 lcore 2: 178988 00:04:43.316 lcore 3: 178989 00:04:43.316 done. 00:04:43.316 00:04:43.316 real 0m1.187s 00:04:43.316 user 0m4.091s 00:04:43.316 sys 0m0.092s 00:04:43.577 10:45:02 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.577 10:45:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 ************************************ 00:04:43.577 END TEST event_perf 00:04:43.577 ************************************ 00:04:43.577 10:45:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:43.577 10:45:02 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:43.577 10:45:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.577 10:45:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 ************************************ 00:04:43.577 START TEST event_reactor 00:04:43.577 ************************************ 00:04:43.577 10:45:02 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:43.577 [2024-11-15 10:45:02.946753] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
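[Note] The per-lcore counters above are event_perf's result: each reactor dispatches events for the requested duration and reports how many it processed. A sketch of the direct invocation with the coremask and duration this run used (path relative to the spdk tree):

  test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors (coremask 0xF), 1 second; prints one 'lcore N: <count>' line per core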
00:04:43.577 [2024-11-15 10:45:02.946848] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157525 ] 00:04:43.577 [2024-11-15 10:45:03.037480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.577 [2024-11-15 10:45:03.074940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.961 test_start 00:04:44.961 oneshot 00:04:44.961 tick 100 00:04:44.961 tick 100 00:04:44.961 tick 250 00:04:44.961 tick 100 00:04:44.961 tick 100 00:04:44.961 tick 250 00:04:44.961 tick 100 00:04:44.961 tick 500 00:04:44.961 tick 100 00:04:44.961 tick 100 00:04:44.961 tick 250 00:04:44.961 tick 100 00:04:44.961 tick 100 00:04:44.961 test_end 00:04:44.961 00:04:44.961 real 0m1.176s 00:04:44.961 user 0m1.093s 00:04:44.961 sys 0m0.078s 00:04:44.961 10:45:04 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.961 10:45:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:44.961 ************************************ 00:04:44.961 END TEST event_reactor 00:04:44.961 ************************************ 00:04:44.961 10:45:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.961 10:45:04 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:44.961 10:45:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.961 10:45:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.961 ************************************ 00:04:44.961 START TEST event_reactor_perf 00:04:44.961 ************************************ 00:04:44.961 10:45:04 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.961 [2024-11-15 10:45:04.201015] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
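[Note] The test_start / oneshot / tick / test_end lines above come from the reactor test: the 'oneshot' and 'tick N' entries trace registered pollers firing, where N appears to be each poller's configured period. The reactor_perf run starting here measures raw event throughput instead. Direct invocations, mirroring the harness commands in this trace:

  test/event/reactor/reactor -t 1             # poller tick trace for 1 second (output above)
  test/event/reactor_perf/reactor_perf -t 1   # throughput, reported as 'Performance: <N> events per second'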
00:04:44.961 [2024-11-15 10:45:04.201120] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157875 ] 00:04:44.961 [2024-11-15 10:45:04.287901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.961 [2024-11-15 10:45:04.325176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.901 test_start 00:04:45.901 test_end 00:04:45.901 Performance: 535070 events per second 00:04:45.901 00:04:45.901 real 0m1.172s 00:04:45.901 user 0m1.082s 00:04:45.901 sys 0m0.086s 00:04:45.901 10:45:05 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.901 10:45:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.901 ************************************ 00:04:45.901 END TEST event_reactor_perf 00:04:45.901 ************************************ 00:04:45.901 10:45:05 event -- event/event.sh@49 -- # uname -s 00:04:45.901 10:45:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:45.901 10:45:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:45.901 10:45:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.901 10:45:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.901 10:45:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.162 ************************************ 00:04:46.162 START TEST event_scheduler 00:04:46.162 ************************************ 00:04:46.162 10:45:05 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:46.162 * Looking for test storage... 
00:04:46.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.163 10:45:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.163 --rc genhtml_branch_coverage=1 00:04:46.163 --rc genhtml_function_coverage=1 00:04:46.163 --rc genhtml_legend=1 00:04:46.163 --rc geninfo_all_blocks=1 00:04:46.163 --rc geninfo_unexecuted_blocks=1 00:04:46.163 00:04:46.163 ' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.163 --rc genhtml_branch_coverage=1 00:04:46.163 --rc genhtml_function_coverage=1 00:04:46.163 --rc genhtml_legend=1 00:04:46.163 --rc geninfo_all_blocks=1 00:04:46.163 --rc geninfo_unexecuted_blocks=1 00:04:46.163 00:04:46.163 ' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.163 --rc genhtml_branch_coverage=1 00:04:46.163 --rc genhtml_function_coverage=1 00:04:46.163 --rc genhtml_legend=1 00:04:46.163 --rc geninfo_all_blocks=1 00:04:46.163 --rc geninfo_unexecuted_blocks=1 00:04:46.163 00:04:46.163 ' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.163 --rc genhtml_branch_coverage=1 00:04:46.163 --rc genhtml_function_coverage=1 00:04:46.163 --rc genhtml_legend=1 00:04:46.163 --rc geninfo_all_blocks=1 00:04:46.163 --rc geninfo_unexecuted_blocks=1 00:04:46.163 00:04:46.163 ' 00:04:46.163 10:45:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:46.163 10:45:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=158231 00:04:46.163 10:45:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.163 10:45:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:46.163 10:45:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 158231 
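[Note] The scheduler app is launched with --wait-for-rpc, so framework initialization is completed over RPC, as the trace below shows. A minimal sketch of that handshake, using the same scheduler name and default socket as this run:

  scripts/rpc.py framework_set_scheduler dynamic   # may log a dpdk governor init error first, as below
  scripts/rpc.py framework_start_init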
00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 158231 ']' 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.163 10:45:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.163 [2024-11-15 10:45:05.688142] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:04:46.163 [2024-11-15 10:45:05.688213] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158231 ] 00:04:46.424 [2024-11-15 10:45:05.782671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:46.424 [2024-11-15 10:45:05.838780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.424 [2024-11-15 10:45:05.838939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.424 [2024-11-15 10:45:05.839101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.424 [2024-11-15 10:45:05.839102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:47.016 10:45:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 [2024-11-15 10:45:06.509498] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:47.016 [2024-11-15 10:45:06.509516] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:47.016 [2024-11-15 10:45:06.509525] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:47.016 [2024-11-15 10:45:06.509531] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:47.016 [2024-11-15 10:45:06.509537] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.016 10:45:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.016 10:45:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 [2024-11-15 10:45:06.573300] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
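[Note] The scheduler_create_thread test that follows drives thread lifecycle through the scheduler_plugin RPC plugin; the rpc_cmd wrapper in the trace is rpc.py with that plugin on its import path (arranged by the harness). A sketch of the calls it issues, with the thread IDs this run got back (11 and 12 in the trace below):

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # name, cpumask, percent active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # unpinned; returns its thread_id (11 here)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise thread 11 to 50% active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # delete by id (12 = the 'deleted' thread here)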
00:04:47.278 10:45:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:47.278 10:45:06 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.278 10:45:06 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 ************************************ 00:04:47.278 START TEST scheduler_create_thread 00:04:47.278 ************************************ 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 2 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 3 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 4 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 5 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 6 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 7 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 8 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.278 9 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.278 10:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.850 10 00:04:47.850 10:45:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.850 10:45:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:47.850 10:45:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.850 10:45:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 10:45:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 10:45:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.235 10:45:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.235 10:45:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 10:45:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.807 10:45:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.807 10:45:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.807 10:45:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.807 10:45:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.749 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.749 10:45:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.749 10:45:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.749 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.749 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.321 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.321 00:04:51.321 real 0m4.225s 00:04:51.321 user 0m0.024s 00:04:51.321 sys 0m0.008s 00:04:51.321 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.321 10:45:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.321 ************************************ 00:04:51.321 END TEST scheduler_create_thread 00:04:51.321 ************************************ 00:04:51.582 10:45:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.582 10:45:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 158231 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 158231 ']' 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 158231 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 158231 00:04:51.582 10:45:10 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:51.583 10:45:10 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:51.583 10:45:10 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 158231' 00:04:51.583 killing process with pid 158231 00:04:51.583 10:45:10 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 158231 00:04:51.583 10:45:10 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 158231 00:04:51.843 [2024-11-15 10:45:11.118987] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:51.843 00:04:51.843 real 0m5.843s 00:04:51.843 user 0m12.901s 00:04:51.843 sys 0m0.427s 00:04:51.843 10:45:11 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.843 10:45:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.843 ************************************ 00:04:51.843 END TEST event_scheduler 00:04:51.843 ************************************ 00:04:51.843 10:45:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.843 10:45:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.843 10:45:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.843 10:45:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.843 10:45:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.843 ************************************ 00:04:51.843 START TEST app_repeat 00:04:51.843 ************************************ 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=159784 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 159784' 00:04:51.843 Process app_repeat pid: 159784 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.843 spdk_app_start Round 0 00:04:51.843 10:45:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 159784 /var/tmp/spdk-nbd.sock 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 159784 ']' 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.843 10:45:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.104 [2024-11-15 10:45:11.374209] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:04:52.104 [2024-11-15 10:45:11.374265] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159784 ] 00:04:52.104 [2024-11-15 10:45:11.460163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.104 [2024-11-15 10:45:11.492182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.104 [2024-11-15 10:45:11.492182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.104 10:45:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.104 10:45:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:52.104 10:45:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.364 Malloc0 00:04:52.364 10:45:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.624 Malloc1 00:04:52.624 10:45:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.624 10:45:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.624 /dev/nbd0 00:04:52.624 10:45:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.624 10:45:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:52.624 10:45:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.884 1+0 records in 00:04:52.884 1+0 records out 00:04:52.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281242 s, 14.6 MB/s 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.884 /dev/nbd1 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.884 1+0 records in 00:04:52.884 1+0 records out 00:04:52.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306469 s, 13.4 MB/s 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:52.884 10:45:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.884 
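Both devices pass the same readiness gate, waitfornbd from autotest_common.sh: a device counts as up only once it appears in /proc/partitions and a 4 KiB O_DIRECT read out of it actually returns data. A re-creation of that gate is sketched below; the retry budget of 20 and the dd/stat checks mirror the trace, while the scratch path and sleep interval are assumptions (the log's first-try successes never reach a retry):

    # Sketch of the waitfornbd readiness gate exercised above.
    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/tmp/nbdtest            # scratch file; path illustrative
        for ((i = 1; i <= 20; i++)); do   # wait for the kernel to list it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # interval assumed, not shown in the log
        done
        for ((i = 1; i <= 20; i++)); do
            # A direct read proves the device services I/O, not merely exists.
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
            [[ $(stat -c %s "$tmp") -ne 0 ]] && break
            sleep 0.1
        done
        rm -f "$tmp"
    }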
10:45:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.884 10:45:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.885 10:45:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.145 { 00:04:53.145 "nbd_device": "/dev/nbd0", 00:04:53.145 "bdev_name": "Malloc0" 00:04:53.145 }, 00:04:53.145 { 00:04:53.145 "nbd_device": "/dev/nbd1", 00:04:53.145 "bdev_name": "Malloc1" 00:04:53.145 } 00:04:53.145 ]' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.145 { 00:04:53.145 "nbd_device": "/dev/nbd0", 00:04:53.145 "bdev_name": "Malloc0" 00:04:53.145 }, 00:04:53.145 { 00:04:53.145 "nbd_device": "/dev/nbd1", 00:04:53.145 "bdev_name": "Malloc1" 00:04:53.145 } 00:04:53.145 ]' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.145 /dev/nbd1' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.145 /dev/nbd1' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.145 256+0 records in 00:04:53.145 256+0 records out 00:04:53.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118572 s, 88.4 MB/s 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.145 256+0 records in 00:04:53.145 256+0 records out 00:04:53.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012112 s, 86.6 MB/s 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.145 256+0 records in 00:04:53.145 256+0 records out 00:04:53.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013265 s, 79.0 MB/s 00:04:53.145 10:45:12 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.145 10:45:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.405 10:45:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.406 10:45:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.666 10:45:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.926 10:45:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.926 10:45:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.186 10:45:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.186 [2024-11-15 10:45:13.581529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.186 [2024-11-15 10:45:13.611593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.186 [2024-11-15 10:45:13.611621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.186 [2024-11-15 10:45:13.640553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.186 [2024-11-15 10:45:13.640586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.481 10:45:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.481 10:45:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:57.481 spdk_app_start Round 1 00:04:57.481 10:45:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 159784 /var/tmp/spdk-nbd.sock 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 159784 ']' 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
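Each round re-blocks on waitforlisten before touching the fresh app instance. Conceptually the helper polls rather than sleeps: it keeps confirming the PID is alive and probing the UNIX socket until an RPC round-trips. A hedged sketch follows; max_retries=100 matches the trace, but probing with rpc_get_methods is only one plausible probe, not necessarily the helper's exact mechanism:

    # Sketch: block until an SPDK app answers RPCs on its UNIX socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=$2
        local max_retries=100            # matches the trace's local max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                 # socket is up and serving
            fi
            sleep 0.1
        done
        return 1
    }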
00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.481 10:45:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:57.482 10:45:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.482 Malloc0 00:04:57.482 10:45:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.742 Malloc1 00:04:57.742 10:45:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.742 /dev/nbd0 00:04:57.742 10:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:58.002 1+0 records in 00:04:58.002 1+0 records out 00:04:58.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284582 s, 14.4 MB/s 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.002 /dev/nbd1 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.002 1+0 records in 00:04:58.002 1+0 records out 00:04:58.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280592 s, 14.6 MB/s 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.002 10:45:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.002 10:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.262 { 00:04:58.262 "nbd_device": "/dev/nbd0", 00:04:58.262 "bdev_name": "Malloc0" 00:04:58.262 }, 00:04:58.262 { 00:04:58.262 "nbd_device": "/dev/nbd1", 00:04:58.262 "bdev_name": "Malloc1" 00:04:58.262 } 00:04:58.262 ]' 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.262 { 00:04:58.262 "nbd_device": "/dev/nbd0", 00:04:58.262 "bdev_name": "Malloc0" 00:04:58.262 }, 00:04:58.262 { 00:04:58.262 "nbd_device": "/dev/nbd1", 00:04:58.262 "bdev_name": "Malloc1" 00:04:58.262 } 00:04:58.262 ]' 00:04:58.262 10:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.262 /dev/nbd1' 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.263 /dev/nbd1' 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.263 256+0 records in 00:04:58.263 256+0 records out 00:04:58.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127114 s, 82.5 MB/s 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.263 10:45:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.523 256+0 records in 00:04:58.523 256+0 records out 00:04:58.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121182 s, 86.5 MB/s 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.523 256+0 records in 00:04:58.523 256+0 records out 00:04:58.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012695 s, 82.6 MB/s 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.523 10:45:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.523 10:45:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.783 10:45:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.043 10:45:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.043 10:45:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.304 10:45:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.304 [2024-11-15 10:45:18.731683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.304 [2024-11-15 10:45:18.760838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.304 [2024-11-15 10:45:18.760838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.304 [2024-11-15 10:45:18.790429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.304 [2024-11-15 10:45:18.790459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.597 10:45:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.597 10:45:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.597 spdk_app_start Round 2 00:05:02.597 10:45:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 159784 /var/tmp/spdk-nbd.sock 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 159784 ']' 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
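The nbd_get_count checks sprinkled through each round all reduce to one pipeline: fetch the disk map as JSON over RPC, project out the /dev/nbdX names with jq, and count them with grep -c. Sketched as a standalone helper:

    # Sketch of nbd_get_count from nbd_common.sh.
    nbd_get_count_sketch() {
        local rpc_server=$1
        local json names count
        # Returns e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(jq -r '.[] | .nbd_device' <<< "$json")
        count=$(grep -c /dev/nbd <<< "$names" || true)  # grep -c exits 1 on zero matches
        echo "$count"
    }

This is the pipeline that yields count=2 right after both Malloc bdevs are exported and count=0 after nbd_stop_disk, at which point the '[' 0 -ne 0 ']' guard in the trace falls through to success.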
00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.597 10:45:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:02.597 10:45:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.597 Malloc0 00:05:02.597 10:45:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.857 Malloc1 00:05:02.857 10:45:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.857 10:45:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.117 /dev/nbd0 00:05:03.117 10:45:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.117 10:45:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:03.117 1+0 records in 00:05:03.117 1+0 records out 00:05:03.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293106 s, 14.0 MB/s 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:03.117 10:45:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:03.117 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.117 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.117 10:45:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.117 /dev/nbd1 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.378 1+0 records in 00:05:03.378 1+0 records out 00:05:03.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295135 s, 13.9 MB/s 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:03.378 10:45:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:03.378 { 00:05:03.378 "nbd_device": "/dev/nbd0", 00:05:03.378 "bdev_name": "Malloc0" 00:05:03.378 }, 00:05:03.378 { 00:05:03.378 "nbd_device": "/dev/nbd1", 00:05:03.378 "bdev_name": "Malloc1" 00:05:03.378 } 00:05:03.378 ]' 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.378 { 00:05:03.378 "nbd_device": "/dev/nbd0", 00:05:03.378 "bdev_name": "Malloc0" 00:05:03.378 }, 00:05:03.378 { 00:05:03.378 "nbd_device": "/dev/nbd1", 00:05:03.378 "bdev_name": "Malloc1" 00:05:03.378 } 00:05:03.378 ]' 00:05:03.378 10:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.639 /dev/nbd1' 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.639 /dev/nbd1' 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.639 256+0 records in 00:05:03.639 256+0 records out 00:05:03.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118476 s, 88.5 MB/s 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.639 256+0 records in 00:05:03.639 256+0 records out 00:05:03.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012386 s, 84.7 MB/s 00:05:03.639 10:45:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.640 256+0 records in 00:05:03.640 256+0 records out 00:05:03.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130776 s, 80.2 MB/s 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.640 10:45:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.640 10:45:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.900 10:45:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.161 10:45:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.161 10:45:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.421 10:45:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.421 [2024-11-15 10:45:23.872114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.421 [2024-11-15 10:45:23.901510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.421 [2024-11-15 10:45:23.901511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.421 [2024-11-15 10:45:23.930528] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.421 [2024-11-15 10:45:23.930558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.720 10:45:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 159784 /var/tmp/spdk-nbd.sock 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 159784 ']' 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
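The write/verify body of each round is nbd_dd_data_verify: one 1 MiB random pattern (256 x 4 KiB) is generated once, pushed through every nbd device with O_DIRECT, and later compared back byte-for-byte with cmp before the pattern file is removed. A condensed sketch, with an illustrative scratch path:

    # Sketch of nbd_dd_data_verify: shared random pattern, per-device verify.
    nbd_dd_data_verify_sketch() {
        local operation=$1; shift
        local nbd_list=("$@")
        local tmp_file=/tmp/nbdrandtest   # scratch path is illustrative
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for dev in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for dev in "${nbd_list[@]}"; do
                # -b prints differing bytes; -n 1M bounds the comparison.
                cmp -b -n 1M "$tmp_file" "$dev"
            done
            rm "$tmp_file"
        fi
    }

    nbd_dd_data_verify_sketch write  /dev/nbd0 /dev/nbd1
    nbd_dd_data_verify_sketch verify /dev/nbd0 /dev/nbd1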
00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:07.720 10:45:26 event.app_repeat -- event/event.sh@39 -- # killprocess 159784 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 159784 ']' 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 159784 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.720 10:45:26 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 159784 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 159784' 00:05:07.720 killing process with pid 159784 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@971 -- # kill 159784 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@976 -- # wait 159784 00:05:07.720 spdk_app_start is called in Round 0. 00:05:07.720 Shutdown signal received, stop current app iteration 00:05:07.720 Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 reinitialization... 00:05:07.720 spdk_app_start is called in Round 1. 00:05:07.720 Shutdown signal received, stop current app iteration 00:05:07.720 Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 reinitialization... 00:05:07.720 spdk_app_start is called in Round 2. 00:05:07.720 Shutdown signal received, stop current app iteration 00:05:07.720 Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 reinitialization... 00:05:07.720 spdk_app_start is called in Round 3. 
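Teardown above goes through killprocess rather than a bare kill: the helper verifies the PID is non-empty and still alive, resolves the command name with ps on Linux so it can special-case a sudo wrapper, then SIGTERMs and waits so the exit status is collected. Reconstructed from the checks visible in the trace; the sudo branch is elided since this run never takes it (the name resolves to reactor_0):

    # Sketch of killprocess from autotest_common.sh, as exercised above.
    killprocess_sketch() {
        local pid=$1 process_name=
        [[ -z $pid ]] && return 1
        kill -0 "$pid" || return 1                      # still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :  # the real helper signals the sudo child instead; elided here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap and propagate status
    }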
00:05:07.720 Shutdown signal received, stop current app iteration 00:05:07.720 10:45:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.720 10:45:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.720 00:05:07.720 real 0m15.780s 00:05:07.720 user 0m34.686s 00:05:07.720 sys 0m2.286s 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.720 10:45:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.720 ************************************ 00:05:07.720 END TEST app_repeat 00:05:07.720 ************************************ 00:05:07.720 10:45:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.720 10:45:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.720 10:45:27 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.720 10:45:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.720 10:45:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.720 ************************************ 00:05:07.720 START TEST cpu_locks 00:05:07.720 ************************************ 00:05:07.720 10:45:27 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.982 * Looking for test storage... 00:05:07.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.982 10:45:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.982 --rc genhtml_branch_coverage=1 00:05:07.982 --rc genhtml_function_coverage=1 00:05:07.982 --rc genhtml_legend=1 00:05:07.982 --rc geninfo_all_blocks=1 00:05:07.982 --rc geninfo_unexecuted_blocks=1 00:05:07.982 00:05:07.982 ' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.982 --rc genhtml_branch_coverage=1 00:05:07.982 --rc genhtml_function_coverage=1 00:05:07.982 --rc genhtml_legend=1 00:05:07.982 --rc geninfo_all_blocks=1 00:05:07.982 --rc geninfo_unexecuted_blocks=1 00:05:07.982 00:05:07.982 ' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.982 --rc genhtml_branch_coverage=1 00:05:07.982 --rc genhtml_function_coverage=1 00:05:07.982 --rc genhtml_legend=1 00:05:07.982 --rc geninfo_all_blocks=1 00:05:07.982 --rc geninfo_unexecuted_blocks=1 00:05:07.982 00:05:07.982 ' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.982 --rc genhtml_branch_coverage=1 00:05:07.982 --rc genhtml_function_coverage=1 00:05:07.982 --rc genhtml_legend=1 00:05:07.982 --rc geninfo_all_blocks=1 00:05:07.982 --rc geninfo_unexecuted_blocks=1 00:05:07.982 00:05:07.982 ' 00:05:07.982 10:45:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.982 10:45:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.982 10:45:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.982 10:45:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.982 10:45:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.982 ************************************ 
00:05:07.982 START TEST default_locks 00:05:07.982 ************************************ 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=163198 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 163198 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 163198 ']' 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.982 10:45:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.242 [2024-11-15 10:45:27.518689] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:08.242 [2024-11-15 10:45:27.518752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163198 ] 00:05:08.242 [2024-11-15 10:45:27.608607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.242 [2024-11-15 10:45:27.650312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.811 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.811 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:08.811 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 163198 00:05:08.811 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 163198 00:05:08.811 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.379 lslocks: write error 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 163198 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 163198 ']' 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 163198 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 163198 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.379 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 163198' 
00:05:09.379 killing process with pid 163198 00:05:09.380 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 163198 00:05:09.380 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 163198 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 163198 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 163198 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 163198 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 163198 ']' 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
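Annotation: after pid 163198 is killed, the script runs NOT waitforlisten 163198 to assert that waiting on the dead pid now fails. A sketch of the NOT idiom (the real helper in autotest_common.sh also validates its argument, as the valid_exec_arg steps show; this is a reduced illustration):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # it failed, which is what the caller wanted
    }

    NOT waitforlisten 163198   # passes here: pid 163198 is already gone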
00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (163198) - No such process 00:05:09.640 ERROR: process (pid: 163198) is no longer running 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.640 00:05:09.640 real 0m1.512s 00:05:09.640 user 0m1.593s 00:05:09.640 sys 0m0.567s 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.640 10:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.640 ************************************ 00:05:09.640 END TEST default_locks 00:05:09.640 ************************************ 00:05:09.640 10:45:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.640 10:45:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.640 10:45:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.640 10:45:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.640 ************************************ 00:05:09.640 START TEST default_locks_via_rpc 00:05:09.640 ************************************ 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=163502 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 163502 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 163502 ']' 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
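Annotation: the no_locks check in the trace above (lock_files=() followed by (( 0 != 0 ))) asserts that the killed target left no per-core lock file behind. One way to express the same check, using the /var/tmp/spdk_cpu_lock_* naming that appears later in this log:

    # Fail if any CPU-core lock file survived the previous target.
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) || { echo "stale locks: ${lock_files[*]}" >&2; exit 1; }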
00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.640 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.640 [2024-11-15 10:45:29.114298] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:09.640 [2024-11-15 10:45:29.114367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163502 ] 00:05:09.899 [2024-11-15 10:45:29.204134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.899 [2024-11-15 10:45:29.245214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 163502 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 163502 00:05:10.470 10:45:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 163502 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 163502 ']' 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 163502 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 163502 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.040 10:45:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 163502' 00:05:11.040 killing process with pid 163502 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 163502 00:05:11.040 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 163502 00:05:11.301 00:05:11.301 real 0m1.600s 00:05:11.301 user 0m1.726s 00:05:11.301 sys 0m0.562s 00:05:11.301 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.301 10:45:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 ************************************ 00:05:11.301 END TEST default_locks_via_rpc 00:05:11.301 ************************************ 00:05:11.301 10:45:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:11.301 10:45:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.301 10:45:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.301 10:45:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 ************************************ 00:05:11.301 START TEST non_locking_app_on_locked_coremask 00:05:11.301 ************************************ 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=163848 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 163848 /var/tmp/spdk.sock 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 163848 ']' 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.301 10:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.302 [2024-11-15 10:45:30.777758] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
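Annotation: the default_locks_via_rpc round that just finished exercised the core lock through RPC rather than process lifetime: the target started on core 0, released its lock, then re-acquired it. A compressed sketch of that toggle, assuming rpc.py on PATH and the default /var/tmp/spdk.sock socket:

    rpc.py framework_disable_cpumask_locks     # the core-0 lock file is released
    rpc.py framework_enable_cpumask_locks      # the lock is taken again
    # verify it is back, the way locks_exist does in the trace:
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock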
00:05:11.302 [2024-11-15 10:45:30.777812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163848 ] 00:05:11.562 [2024-11-15 10:45:30.865097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.562 [2024-11-15 10:45:30.906320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=164131 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 164131 /var/tmp/spdk2.sock 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 164131 ']' 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.133 10:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.133 [2024-11-15 10:45:31.641159] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:12.133 [2024-11-15 10:45:31.641215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164131 ] 00:05:12.394 [2024-11-15 10:45:31.730673] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
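Annotation: the point of non_locking_app_on_locked_coremask is that a second target may share core 0 as long as it opts out of lock claiming, which is what the "CPU core locks deactivated" notice above signals. A sketch of the two launches, assuming spdk_tgt is on PATH (the suite uses the full build path):

    spdk_tgt -m 0x1 &                          # claims the core-0 lock file
    pid1=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                    # comes up fine: it never tries to claim core 0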
00:05:12.394 [2024-11-15 10:45:31.730697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.394 [2024-11-15 10:45:31.792866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.963 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.963 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:12.963 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 163848 00:05:12.963 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 163848 00:05:12.963 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.532 lslocks: write error 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 163848 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 163848 ']' 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 163848 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 163848 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 163848' 00:05:13.532 killing process with pid 163848 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 163848 00:05:13.532 10:45:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 163848 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 164131 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 164131 ']' 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 164131 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 164131 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 164131' 00:05:14.100 killing 
process with pid 164131 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 164131 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 164131 00:05:14.100 00:05:14.100 real 0m2.853s 00:05:14.100 user 0m3.174s 00:05:14.100 sys 0m0.879s 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.100 10:45:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.100 ************************************ 00:05:14.100 END TEST non_locking_app_on_locked_coremask 00:05:14.100 ************************************ 00:05:14.100 10:45:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.100 10:45:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.100 10:45:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.100 10:45:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.361 ************************************ 00:05:14.361 START TEST locking_app_on_unlocked_coremask 00:05:14.361 ************************************ 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=164507 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 164507 /var/tmp/spdk.sock 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 164507 ']' 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.361 10:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.361 [2024-11-15 10:45:33.709440] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:14.361 [2024-11-15 10:45:33.709493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164507 ] 00:05:14.361 [2024-11-15 10:45:33.796139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.361 [2024-11-15 10:45:33.796163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.361 [2024-11-15 10:45:33.829801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=164741 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 164741 /var/tmp/spdk2.sock 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 164741 ']' 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.304 10:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.304 [2024-11-15 10:45:34.545308] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
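Annotation: in this round the roles are reversed. The unlocked instance (164507) came up first, so the second, lock-enabled instance (164741) is the one that should own core 0; the locks_exist 164741 step that follows checks exactly that. Compressed:

    lslocks -p 164741 | grep -q spdk_cpu_lock && echo "pid 164741 holds the core-0 lock"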
00:05:15.305 [2024-11-15 10:45:34.545361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164741 ] 00:05:15.305 [2024-11-15 10:45:34.633543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.305 [2024-11-15 10:45:34.691622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.875 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.875 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:15.875 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 164741 00:05:15.875 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 164741 00:05:15.875 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.446 lslocks: write error 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 164507 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 164507 ']' 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 164507 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 164507 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 164507' 00:05:16.446 killing process with pid 164507 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 164507 00:05:16.446 10:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 164507 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 164741 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 164741 ']' 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 164741 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 164741 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.018 10:45:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 164741' 00:05:17.018 killing process with pid 164741 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 164741 00:05:17.018 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 164741 00:05:17.279 00:05:17.279 real 0m2.946s 00:05:17.279 user 0m3.257s 00:05:17.279 sys 0m0.927s 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 ************************************ 00:05:17.279 END TEST locking_app_on_unlocked_coremask 00:05:17.279 ************************************ 00:05:17.279 10:45:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:17.279 10:45:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.279 10:45:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.279 10:45:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 ************************************ 00:05:17.279 START TEST locking_app_on_locked_coremask 00:05:17.279 ************************************ 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=165214 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 165214 /var/tmp/spdk.sock 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 165214 ']' 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.279 10:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 [2024-11-15 10:45:36.725813] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
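Annotation: the locking_app_on_locked_coremask round starting here asserts the failure path. With core 0 already claimed by 165214, a second lock-enabled target on the same mask must refuse to start, and the script proves it with the NOT idiom from earlier. Sketch:

    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # the target aborts before it ever listens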
00:05:17.279 [2024-11-15 10:45:36.725863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165214 ] 00:05:17.540 [2024-11-15 10:45:36.809949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.540 [2024-11-15 10:45:36.839626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=165265 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 165265 /var/tmp/spdk2.sock 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 165265 /var/tmp/spdk2.sock 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 165265 /var/tmp/spdk2.sock 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 165265 ']' 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.111 10:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.111 [2024-11-15 10:45:37.562934] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:05:18.111 [2024-11-15 10:45:37.562988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165265 ] 00:05:18.371 [2024-11-15 10:45:37.652657] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 165214 has claimed it. 00:05:18.371 [2024-11-15 10:45:37.652692] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (165265) - No such process 00:05:18.941 ERROR: process (pid: 165265) is no longer running 00:05:18.941 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 165214 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 165214 00:05:18.942 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.202 lslocks: write error 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 165214 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 165214 ']' 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 165214 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 165214 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 165214' 00:05:19.202 killing process with pid 165214 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 165214 00:05:19.202 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 165214 00:05:19.463 00:05:19.463 real 0m2.190s 00:05:19.463 user 0m2.476s 00:05:19.463 sys 0m0.613s 00:05:19.463 10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.463 
10:45:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.463 ************************************ 00:05:19.463 END TEST locking_app_on_locked_coremask 00:05:19.463 ************************************ 00:05:19.463 10:45:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:19.463 10:45:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.463 10:45:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.463 10:45:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.463 ************************************ 00:05:19.463 START TEST locking_overlapped_coremask 00:05:19.463 ************************************ 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=165593 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 165593 /var/tmp/spdk.sock 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 165593 ']' 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.463 10:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.463 [2024-11-15 10:45:38.988959] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
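Annotation: the masks in locking_overlapped_coremask are chosen to collide on exactly one core. 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the contested one, which matches the "Cannot create lock on core 2" error later in the trace. A quick way to see the overlap:

    printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2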
00:05:19.463 [2024-11-15 10:45:38.989008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165593 ] 00:05:19.724 [2024-11-15 10:45:39.071636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.724 [2024-11-15 10:45:39.103594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.724 [2024-11-15 10:45:39.103754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.724 [2024-11-15 10:45:39.103866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=165929 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 165929 /var/tmp/spdk2.sock 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 165929 /var/tmp/spdk2.sock 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 165929 /var/tmp/spdk2.sock 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 165929 ']' 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.296 10:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.558 [2024-11-15 10:45:39.845847] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:05:20.558 [2024-11-15 10:45:39.845899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165929 ] 00:05:20.558 [2024-11-15 10:45:39.956897] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 165593 has claimed it. 00:05:20.558 [2024-11-15 10:45:39.956937] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (165929) - No such process 00:05:21.129 ERROR: process (pid: 165929) is no longer running 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 165593 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 165593 ']' 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 165593 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 165593 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 165593' 00:05:21.129 killing process with pid 165593 00:05:21.129 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 165593 00:05:21.129 10:45:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 165593 00:05:21.389 00:05:21.389 real 0m1.782s 00:05:21.389 user 0m5.161s 00:05:21.389 sys 0m0.397s 00:05:21.389 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.389 10:45:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.389 ************************************ 00:05:21.389 END TEST locking_overlapped_coremask 00:05:21.389 ************************************ 00:05:21.389 10:45:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:21.389 10:45:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.389 10:45:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.389 10:45:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.389 ************************************ 00:05:21.389 START TEST locking_overlapped_coremask_via_rpc 00:05:21.389 ************************************ 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=165980 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 165980 /var/tmp/spdk.sock 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 165980 ']' 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.390 10:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.390 [2024-11-15 10:45:40.847806] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:21.390 [2024-11-15 10:45:40.847859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165980 ] 00:05:21.650 [2024-11-15 10:45:40.933310] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
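Annotation: before the overlapped round wound down, check_remaining_locks (visible in the trace above) compared the lock files on disk against the set that the 0x7 mask should own. A sketch of that comparison, lifted almost directly from the traced steps:

    # Expect exactly cores 0-2 to be locked, nothing more.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || exit 1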
00:05:21.650 [2024-11-15 10:45:40.933332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.650 [2024-11-15 10:45:40.966202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.650 [2024-11-15 10:45:40.966351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.650 [2024-11-15 10:45:40.966353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=166295 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 166295 /var/tmp/spdk2.sock 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 166295 ']' 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.221 10:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.221 [2024-11-15 10:45:41.705316] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:22.221 [2024-11-15 10:45:41.705371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166295 ] 00:05:22.481 [2024-11-15 10:45:41.819013] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.481 [2024-11-15 10:45:41.819040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.481 [2024-11-15 10:45:41.896915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.481 [2024-11-15 10:45:41.897075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.481 [2024-11-15 10:45:41.897076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.053 [2024-11-15 10:45:42.509650] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 165980 has claimed it. 
00:05:23.053 request: 00:05:23.053 { 00:05:23.053 "method": "framework_enable_cpumask_locks", 00:05:23.053 "req_id": 1 00:05:23.053 } 00:05:23.053 Got JSON-RPC error response 00:05:23.053 response: 00:05:23.053 { 00:05:23.053 "code": -32603, 00:05:23.053 "message": "Failed to claim CPU core: 2" 00:05:23.053 } 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 165980 /var/tmp/spdk.sock 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 165980 ']' 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.053 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 166295 /var/tmp/spdk2.sock 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 166295 ']' 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
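The -32603 error above is the expected outcome of this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so both try to claim core 2. A minimal sketch of the same collision, assuming an SPDK checkout with spdk_tgt and rpc.py at the paths used throughout this log (waitforlisten-style startup synchronization omitted):

    # Both targets start with core locks disabled; lock files are per-core,
    # e.g. /var/tmp/spdk_cpu_lock_002 guards core 2.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py framework_enable_cpumask_locks                 # first claim wins
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2" (as logged above)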
00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.314 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.575 00:05:23.575 real 0m2.093s 00:05:23.575 user 0m0.862s 00:05:23.575 sys 0m0.151s 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.575 10:45:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.575 ************************************ 00:05:23.575 END TEST locking_overlapped_coremask_via_rpc 00:05:23.575 ************************************ 00:05:23.575 10:45:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:23.575 10:45:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 165980 ]] 00:05:23.575 10:45:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 165980 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 165980 ']' 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 165980 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 165980 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 165980' 00:05:23.575 killing process with pid 165980 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 165980 00:05:23.575 10:45:42 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 165980 00:05:23.836 10:45:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 166295 ]] 00:05:23.836 10:45:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 166295 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 166295 ']' 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 166295 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 166295 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 166295' 00:05:23.836 killing process with pid 166295 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 166295 00:05:23.836 10:45:43 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 166295 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 165980 ]] 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 165980 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 165980 ']' 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 165980 00:05:24.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (165980) - No such process 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 165980 is not found' 00:05:24.096 Process with pid 165980 is not found 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 166295 ]] 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 166295 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 166295 ']' 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 166295 00:05:24.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (166295) - No such process 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 166295 is not found' 00:05:24.096 Process with pid 166295 is not found 00:05:24.096 10:45:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.096 00:05:24.096 real 0m16.239s 00:05:24.096 user 0m28.340s 00:05:24.096 sys 0m5.045s 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.096 10:45:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.096 ************************************ 00:05:24.096 END TEST cpu_locks 00:05:24.096 ************************************ 00:05:24.096 00:05:24.096 real 0m42.077s 00:05:24.096 user 1m22.473s 00:05:24.096 sys 0m8.451s 00:05:24.096 10:45:43 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.096 10:45:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.096 ************************************ 00:05:24.096 END TEST event 00:05:24.096 ************************************ 00:05:24.096 10:45:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:24.096 10:45:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.096 10:45:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.096 10:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.096 ************************************ 00:05:24.096 START TEST thread 00:05:24.096 ************************************ 00:05:24.097 10:45:43 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:24.357 * Looking for test storage... 00:05:24.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.357 10:45:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.357 10:45:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.357 10:45:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.357 10:45:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.357 10:45:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.357 10:45:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.357 10:45:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.357 10:45:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.357 10:45:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.357 10:45:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.357 10:45:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.357 10:45:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:24.357 10:45:43 thread -- scripts/common.sh@345 -- # : 1 00:05:24.357 10:45:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.357 10:45:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.357 10:45:43 thread -- scripts/common.sh@365 -- # decimal 1 00:05:24.357 10:45:43 thread -- scripts/common.sh@353 -- # local d=1 00:05:24.357 10:45:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.357 10:45:43 thread -- scripts/common.sh@355 -- # echo 1 00:05:24.357 10:45:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.357 10:45:43 thread -- scripts/common.sh@366 -- # decimal 2 00:05:24.357 10:45:43 thread -- scripts/common.sh@353 -- # local d=2 00:05:24.357 10:45:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.357 10:45:43 thread -- scripts/common.sh@355 -- # echo 2 00:05:24.357 10:45:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.357 10:45:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.357 10:45:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.357 10:45:43 thread -- scripts/common.sh@368 -- # return 0 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.357 --rc genhtml_branch_coverage=1 00:05:24.357 --rc genhtml_function_coverage=1 00:05:24.357 --rc genhtml_legend=1 00:05:24.357 --rc geninfo_all_blocks=1 00:05:24.357 --rc geninfo_unexecuted_blocks=1 00:05:24.357 00:05:24.357 ' 00:05:24.357 10:45:43 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.358 --rc genhtml_branch_coverage=1 00:05:24.358 --rc genhtml_function_coverage=1 00:05:24.358 --rc genhtml_legend=1 00:05:24.358 --rc geninfo_all_blocks=1 00:05:24.358 --rc geninfo_unexecuted_blocks=1 00:05:24.358 00:05:24.358 ' 00:05:24.358 10:45:43 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.358 --rc genhtml_branch_coverage=1 00:05:24.358 --rc genhtml_function_coverage=1 00:05:24.358 --rc genhtml_legend=1 00:05:24.358 --rc geninfo_all_blocks=1 00:05:24.358 --rc geninfo_unexecuted_blocks=1 00:05:24.358 00:05:24.358 ' 00:05:24.358 10:45:43 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.358 --rc genhtml_branch_coverage=1 00:05:24.358 --rc genhtml_function_coverage=1 00:05:24.358 --rc genhtml_legend=1 00:05:24.358 --rc geninfo_all_blocks=1 00:05:24.358 --rc geninfo_unexecuted_blocks=1 00:05:24.358 00:05:24.358 ' 00:05:24.358 10:45:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:24.358 10:45:43 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:24.358 10:45:43 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.358 10:45:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.358 ************************************ 00:05:24.358 START TEST thread_poller_perf 00:05:24.358 ************************************ 00:05:24.358 10:45:43 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:24.358 [2024-11-15 10:45:43.830854] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:24.358 [2024-11-15 10:45:43.830960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166751 ] 00:05:24.618 [2024-11-15 10:45:43.920503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.618 [2024-11-15 10:45:43.952510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.618 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:25.557 [2024-11-15T09:45:45.084Z] ======================================
00:05:25.557 [2024-11-15T09:45:45.084Z] busy:2406333306 (cyc)
00:05:25.557 [2024-11-15T09:45:45.084Z] total_run_count: 418000
00:05:25.557 [2024-11-15T09:45:45.084Z] tsc_hz: 2400000000 (cyc)
00:05:25.557 [2024-11-15T09:45:45.084Z] ======================================
00:05:25.557 [2024-11-15T09:45:45.084Z] poller_cost: 5756 (cyc), 2398 (nsec)
00:05:25.557
00:05:25.557 real 0m1.176s
00:05:25.557 user 0m1.095s
00:05:25.557 sys 0m0.076s
00:05:25.557 10:45:44 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:25.557 10:45:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:25.557 ************************************
00:05:25.557 END TEST thread_poller_perf
00:05:25.557 ************************************
00:05:25.557 10:45:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:25.557 10:45:45 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:05:25.557 10:45:45 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:25.557 10:45:45 thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.557 ************************************
00:05:25.557 START TEST thread_poller_perf
00:05:25.557 ************************************
00:05:25.557 10:45:45 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:25.817 [2024-11-15 10:45:45.087602] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:05:25.817 [2024-11-15 10:45:45.087705] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167103 ]
00:05:25.817 [2024-11-15 10:45:45.183819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.817 [2024-11-15 10:45:45.217843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.817 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:26.756 [2024-11-15T09:45:46.283Z] ======================================
00:05:26.756 [2024-11-15T09:45:46.283Z] busy:2401665972 (cyc)
00:05:26.756 [2024-11-15T09:45:46.283Z] total_run_count: 5558000
00:05:26.756 [2024-11-15T09:45:46.283Z] tsc_hz: 2400000000 (cyc)
00:05:26.756 [2024-11-15T09:45:46.283Z] ======================================
00:05:26.756 [2024-11-15T09:45:46.283Z] poller_cost: 432 (cyc), 180 (nsec)
00:05:26.756
00:05:26.756 real 0m1.181s
00:05:26.756 user 0m1.096s
00:05:26.756 sys 0m0.082s
00:05:26.756 10:45:46 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:26.756 10:45:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:26.756 ************************************
00:05:26.756 END TEST thread_poller_perf
00:05:26.756 ************************************
00:05:26.756 10:45:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:26.756
00:05:26.756 real 0m2.714s
00:05:26.756 user 0m2.365s
00:05:26.756 sys 0m0.364s
00:05:26.756 10:45:46 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:26.756 10:45:46 thread -- common/autotest_common.sh@10 -- # set +x
00:05:26.756 ************************************
00:05:26.756 END TEST thread
00:05:26.756 ************************************
00:05:27.017 10:45:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:27.017 10:45:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:27.017 10:45:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:27.017 10:45:46 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:27.017 10:45:46 -- common/autotest_common.sh@10 -- # set +x
00:05:27.017 ************************************
00:05:27.017 START TEST app_cmdline
00:05:27.017 ************************************
00:05:27.017 10:45:46 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:27.017 * Looking for test storage...
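Both poller_cost figures in the thread tests above follow directly from the printed counters: cost in cycles is busy divided by total_run_count, and the nanosecond figure converts through tsc_hz (2.4 GHz on this node). A quick shell check using only the numbers shown:

    # run 1 (1 us period -> timed pollers), run 2 (0 us period -> active pollers)
    for spec in "2406333306 418000" "2401665972 5558000"; do
        set -- $spec
        cyc=$(( $1 / $2 ))                                   # busy / total_run_count
        echo "poller_cost: ${cyc} (cyc), $(( cyc * 1000000000 / 2400000000 )) (nsec)"
    done
    # -> 5756 (cyc), 2398 (nsec) and 432 (cyc), 180 (nsec), matching the tables

The roughly 13x gap between the runs is plausibly the timer bookkeeping: a nonzero -l period registers timed pollers, while a 0 period registers plain active pollers that just run back to back.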
00:05:27.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:27.017 10:45:46 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.017 10:45:46 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.017 10:45:46 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.277 10:45:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.277 --rc genhtml_branch_coverage=1 00:05:27.277 --rc genhtml_function_coverage=1 00:05:27.277 --rc genhtml_legend=1 00:05:27.277 --rc geninfo_all_blocks=1 00:05:27.277 --rc geninfo_unexecuted_blocks=1 00:05:27.277 00:05:27.277 ' 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.277 --rc genhtml_branch_coverage=1 00:05:27.277 --rc genhtml_function_coverage=1 00:05:27.277 --rc genhtml_legend=1 00:05:27.277 --rc geninfo_all_blocks=1 00:05:27.277 --rc geninfo_unexecuted_blocks=1 
00:05:27.277 00:05:27.277 ' 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.277 --rc genhtml_branch_coverage=1 00:05:27.277 --rc genhtml_function_coverage=1 00:05:27.277 --rc genhtml_legend=1 00:05:27.277 --rc geninfo_all_blocks=1 00:05:27.277 --rc geninfo_unexecuted_blocks=1 00:05:27.277 00:05:27.277 ' 00:05:27.277 10:45:46 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.277 --rc genhtml_branch_coverage=1 00:05:27.277 --rc genhtml_function_coverage=1 00:05:27.277 --rc genhtml_legend=1 00:05:27.277 --rc geninfo_all_blocks=1 00:05:27.277 --rc geninfo_unexecuted_blocks=1 00:05:27.277 00:05:27.277 ' 00:05:27.277 10:45:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:27.278 10:45:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=167500 00:05:27.278 10:45:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 167500 00:05:27.278 10:45:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 167500 ']' 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.278 10:45:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 [2024-11-15 10:45:46.626632] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:05:27.278 [2024-11-15 10:45:46.626686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167500 ] 00:05:27.278 [2024-11-15 10:45:46.687607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.278 [2024-11-15 10:45:46.717579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.537 10:45:46 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.537 10:45:46 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:27.537 10:45:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:27.537 { 00:05:27.537 "version": "SPDK v25.01-pre git sha1 8c4dec1aa", 00:05:27.537 "fields": { 00:05:27.537 "major": 25, 00:05:27.537 "minor": 1, 00:05:27.537 "patch": 0, 00:05:27.537 "suffix": "-pre", 00:05:27.537 "commit": "8c4dec1aa" 00:05:27.537 } 00:05:27.537 } 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.798 request: 00:05:27.798 { 00:05:27.798 "method": "env_dpdk_get_mem_stats", 00:05:27.798 "req_id": 1 00:05:27.798 } 00:05:27.798 Got JSON-RPC error response 00:05:27.798 response: 00:05:27.798 { 00:05:27.798 "code": -32601, 00:05:27.798 "message": "Method not found" 00:05:27.798 } 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.798 10:45:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 167500 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 167500 ']' 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 167500 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.798 10:45:47 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 167500 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 167500' 00:05:28.058 killing process with pid 167500 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@971 -- # kill 167500 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@976 -- # wait 167500 00:05:28.058 00:05:28.058 real 0m1.197s 00:05:28.058 user 0m1.497s 00:05:28.058 sys 0m0.395s 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.058 10:45:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.058 ************************************ 00:05:28.058 END TEST app_cmdline 00:05:28.058 ************************************ 00:05:28.317 10:45:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:28.317 10:45:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.317 10:45:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.317 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.317 ************************************ 00:05:28.317 START TEST version 00:05:28.317 ************************************ 00:05:28.317 10:45:47 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:28.317 * Looking for test storage... 
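The app_cmdline run that just finished exercises the RPC allow-list: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else is rejected with -32601. Condensed, with the same binaries and scripts used throughout this log:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version        # allowed: returns the version JSON above
    ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly these two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # filtered: error -32601, "Method not found"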
00:05:28.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.317 10:45:47 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.317 10:45:47 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.317 10:45:47 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.317 10:45:47 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.317 10:45:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.318 10:45:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.318 10:45:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.318 10:45:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.318 10:45:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.318 10:45:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.318 10:45:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.318 10:45:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.318 10:45:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.318 10:45:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.318 10:45:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.318 10:45:47 version -- scripts/common.sh@344 -- # case "$op" in 00:05:28.318 10:45:47 version -- scripts/common.sh@345 -- # : 1 00:05:28.318 10:45:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.318 10:45:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.318 10:45:47 version -- scripts/common.sh@365 -- # decimal 1 00:05:28.318 10:45:47 version -- scripts/common.sh@353 -- # local d=1 00:05:28.318 10:45:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.318 10:45:47 version -- scripts/common.sh@355 -- # echo 1 00:05:28.318 10:45:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.318 10:45:47 version -- scripts/common.sh@366 -- # decimal 2 00:05:28.318 10:45:47 version -- scripts/common.sh@353 -- # local d=2 00:05:28.318 10:45:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.318 10:45:47 version -- scripts/common.sh@355 -- # echo 2 00:05:28.318 10:45:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.318 10:45:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.318 10:45:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.318 10:45:47 version -- scripts/common.sh@368 -- # return 0 00:05:28.318 10:45:47 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.318 10:45:47 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.318 --rc genhtml_branch_coverage=1 00:05:28.318 --rc genhtml_function_coverage=1 00:05:28.318 --rc genhtml_legend=1 00:05:28.318 --rc geninfo_all_blocks=1 00:05:28.318 --rc geninfo_unexecuted_blocks=1 00:05:28.318 00:05:28.318 ' 00:05:28.318 10:45:47 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.318 --rc genhtml_branch_coverage=1 00:05:28.318 --rc genhtml_function_coverage=1 00:05:28.318 --rc genhtml_legend=1 00:05:28.318 --rc geninfo_all_blocks=1 00:05:28.318 --rc geninfo_unexecuted_blocks=1 00:05:28.318 00:05:28.318 ' 00:05:28.318 10:45:47 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.318 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.318 --rc genhtml_branch_coverage=1 00:05:28.318 --rc genhtml_function_coverage=1 00:05:28.318 --rc genhtml_legend=1 00:05:28.318 --rc geninfo_all_blocks=1 00:05:28.318 --rc geninfo_unexecuted_blocks=1 00:05:28.318 00:05:28.318 ' 00:05:28.318 10:45:47 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.318 --rc genhtml_branch_coverage=1 00:05:28.318 --rc genhtml_function_coverage=1 00:05:28.318 --rc genhtml_legend=1 00:05:28.318 --rc geninfo_all_blocks=1 00:05:28.318 --rc geninfo_unexecuted_blocks=1 00:05:28.318 00:05:28.318 ' 00:05:28.318 10:45:47 version -- app/version.sh@17 -- # get_header_version major 00:05:28.318 10:45:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.318 10:45:47 version -- app/version.sh@14 -- # cut -f2 00:05:28.318 10:45:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.318 10:45:47 version -- app/version.sh@17 -- # major=25 00:05:28.579 10:45:47 version -- app/version.sh@18 -- # get_header_version minor 00:05:28.579 10:45:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # cut -f2 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.579 10:45:47 version -- app/version.sh@18 -- # minor=1 00:05:28.579 10:45:47 version -- app/version.sh@19 -- # get_header_version patch 00:05:28.579 10:45:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # cut -f2 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.579 10:45:47 version -- app/version.sh@19 -- # patch=0 00:05:28.579 10:45:47 version -- app/version.sh@20 -- # get_header_version suffix 00:05:28.579 10:45:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # cut -f2 00:05:28.579 10:45:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.579 10:45:47 version -- app/version.sh@20 -- # suffix=-pre 00:05:28.579 10:45:47 version -- app/version.sh@22 -- # version=25.1 00:05:28.579 10:45:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:28.579 10:45:47 version -- app/version.sh@28 -- # version=25.1rc0 00:05:28.579 10:45:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:28.579 10:45:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:28.579 10:45:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:28.579 10:45:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:28.579 00:05:28.579 real 0m0.277s 00:05:28.579 user 0m0.181s 00:05:28.579 sys 0m0.146s 00:05:28.579 10:45:47 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.579 
10:45:47 version -- common/autotest_common.sh@10 -- # set +x 00:05:28.579 ************************************ 00:05:28.579 END TEST version 00:05:28.579 ************************************ 00:05:28.579 10:45:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:28.579 10:45:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:28.579 10:45:47 -- spdk/autotest.sh@194 -- # uname -s 00:05:28.579 10:45:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:28.579 10:45:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:28.579 10:45:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:28.579 10:45:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:28.579 10:45:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:28.579 10:45:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:28.579 10:45:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.579 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.579 10:45:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:28.579 10:45:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:28.579 10:45:48 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:28.579 10:45:48 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:28.579 10:45:48 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:28.579 10:45:48 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:28.579 10:45:48 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.579 10:45:48 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:28.579 10:45:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.579 10:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:28.579 ************************************ 00:05:28.579 START TEST nvmf_tcp 00:05:28.579 ************************************ 00:05:28.579 10:45:48 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.839 * Looking for test storage... 
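The version test above recovers the version string purely from include/spdk/version.h; each component comes through the same grep/cut/tr pipeline visible in the trace. A standalone sketch of that logic (the rc0 mapping for a -pre suffix is inferred from the trace, not spelled out in it):

    get_header_version() {   # fields in version.h are tab-separated, hence cut -f2
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    (( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"
    [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0
    echo "$version"   # -> 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'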
00:05:28.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.839 10:45:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.839 --rc genhtml_branch_coverage=1 00:05:28.839 --rc genhtml_function_coverage=1 00:05:28.839 --rc genhtml_legend=1 00:05:28.839 --rc geninfo_all_blocks=1 00:05:28.839 --rc geninfo_unexecuted_blocks=1 00:05:28.839 00:05:28.839 ' 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.839 --rc genhtml_branch_coverage=1 00:05:28.839 --rc genhtml_function_coverage=1 00:05:28.839 --rc genhtml_legend=1 00:05:28.839 --rc geninfo_all_blocks=1 00:05:28.839 --rc geninfo_unexecuted_blocks=1 00:05:28.839 00:05:28.839 ' 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:28.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.839 --rc genhtml_branch_coverage=1 00:05:28.839 --rc genhtml_function_coverage=1 00:05:28.839 --rc genhtml_legend=1 00:05:28.839 --rc geninfo_all_blocks=1 00:05:28.839 --rc geninfo_unexecuted_blocks=1 00:05:28.839 00:05:28.839 ' 00:05:28.839 10:45:48 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.839 --rc genhtml_branch_coverage=1 00:05:28.839 --rc genhtml_function_coverage=1 00:05:28.839 --rc genhtml_legend=1 00:05:28.839 --rc geninfo_all_blocks=1 00:05:28.839 --rc geninfo_unexecuted_blocks=1 00:05:28.839 00:05:28.839 ' 00:05:28.839 10:45:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:28.839 10:45:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:28.839 10:45:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:28.840 10:45:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:28.840 10:45:48 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.840 10:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.840 ************************************ 00:05:28.840 START TEST nvmf_target_core 00:05:28.840 ************************************ 00:05:28.840 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:29.102 * Looking for test storage... 00:05:29.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:29.102 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.103 --rc genhtml_branch_coverage=1 00:05:29.103 --rc genhtml_function_coverage=1 00:05:29.103 --rc genhtml_legend=1 00:05:29.103 --rc geninfo_all_blocks=1 00:05:29.103 --rc geninfo_unexecuted_blocks=1 00:05:29.103 00:05:29.103 ' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.103 --rc genhtml_branch_coverage=1 00:05:29.103 --rc genhtml_function_coverage=1 00:05:29.103 --rc genhtml_legend=1 00:05:29.103 --rc geninfo_all_blocks=1 00:05:29.103 --rc geninfo_unexecuted_blocks=1 00:05:29.103 00:05:29.103 ' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.103 --rc genhtml_branch_coverage=1 00:05:29.103 --rc genhtml_function_coverage=1 00:05:29.103 --rc genhtml_legend=1 00:05:29.103 --rc geninfo_all_blocks=1 00:05:29.103 --rc geninfo_unexecuted_blocks=1 00:05:29.103 00:05:29.103 ' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.103 --rc genhtml_branch_coverage=1 00:05:29.103 --rc genhtml_function_coverage=1 00:05:29.103 --rc genhtml_legend=1 00:05:29.103 --rc geninfo_all_blocks=1 00:05:29.103 --rc geninfo_unexecuted_blocks=1 00:05:29.103 00:05:29.103 ' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 
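The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is test(1) being handed an empty string where -eq needs an integer: the trace shows the literal expansion '[' '' -eq 1 ']'. A minimal sketch of the failing pattern and a defensive rewrite; the variable name is illustrative, not the one common.sh actually uses:

  #!/usr/bin/env bash
  flag=""                        # empty in this run, as in the trace

  # Reproduces the logged complaint: "[: : integer expression expected"
  [ "$flag" -eq 1 ] && echo "flag set"

  # Defensive rewrite: default the empty value to 0 before the numeric test
  [ "${flag:-0}" -eq 1 ] && echo "flag set"
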
************************************ 00:05:29.103 START TEST nvmf_abort 00:05:29.103 ************************************ 00:05:29.103 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:29.365 * Looking for test storage... 00:05:29.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.365 --rc genhtml_branch_coverage=1 00:05:29.365 --rc genhtml_function_coverage=1 00:05:29.365 --rc genhtml_legend=1 00:05:29.365 --rc geninfo_all_blocks=1 00:05:29.365 --rc geninfo_unexecuted_blocks=1 00:05:29.365 00:05:29.365 ' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.365 --rc genhtml_branch_coverage=1 00:05:29.365 --rc genhtml_function_coverage=1 00:05:29.365 --rc genhtml_legend=1 00:05:29.365 --rc geninfo_all_blocks=1 00:05:29.365 --rc geninfo_unexecuted_blocks=1 00:05:29.365 00:05:29.365 ' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.365 --rc genhtml_branch_coverage=1 00:05:29.365 --rc genhtml_function_coverage=1 00:05:29.365 --rc genhtml_legend=1 00:05:29.365 --rc geninfo_all_blocks=1 00:05:29.365 --rc geninfo_unexecuted_blocks=1 00:05:29.365 00:05:29.365 ' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.365 --rc genhtml_branch_coverage=1 00:05:29.365 --rc genhtml_function_coverage=1 00:05:29.365 --rc genhtml_legend=1 00:05:29.365 --rc geninfo_all_blocks=1 00:05:29.365 --rc geninfo_unexecuted_blocks=1 00:05:29.365 00:05:29.365 ' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.365 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
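Each time paths/export.sh is re-sourced above, it prepends the same /opt toolchain directories again, which is why the logged PATH keeps growing: the golangci/protoc/go triple repeats more times in this abort.sh trace than it did under nvmf_target_core. A sketch of an idempotent prepend; the pathmunge helper is an illustration, not something export.sh defines:

  pathmunge() {
      case ":$PATH:" in
          *":$1:"*) ;;             # already present: leave PATH alone
          *) PATH="$1:$PATH" ;;    # otherwise prepend exactly once
      esac
  }
  pathmunge /opt/go/1.21.1/bin
  pathmunge /opt/golangci/1.54.2/bin
  pathmunge /opt/protoc/21.7/bin
  export PATH
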
00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:29.366 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:37.503 10:45:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:37.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:37.503 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:37.503 10:45:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:37.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.503 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:37.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:37.504 10:45:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:37.504 10:45:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:37.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:37.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:05:37.504 00:05:37.504 --- 10.0.0.2 ping statistics --- 00:05:37.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:37.504 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:37.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:37.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:05:37.504 00:05:37.504 --- 10.0.0.1 ping statistics --- 00:05:37.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:37.504 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=171680 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 171680 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 171680 ']' 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.504 10:45:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 [2024-11-15 10:45:56.415487] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
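Before the target comes up, nvmftestinit above carved the two e810 ports into a point-to-point NVMe/TCP topology: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and the two pings confirm reachability in both directions. Condensed from the ip(8) and iptables calls in the trace (address flushes omitted):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                            # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
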
00:05:37.504 [2024-11-15 10:45:56.415556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:37.504 [2024-11-15 10:45:56.518459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.504 [2024-11-15 10:45:56.572557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:37.504 [2024-11-15 10:45:56.572618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:37.504 [2024-11-15 10:45:56.572627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.504 [2024-11-15 10:45:56.572634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.504 [2024-11-15 10:45:56.572641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:37.504 [2024-11-15 10:45:56.574785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.504 [2024-11-15 10:45:56.574923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.504 [2024-11-15 10:45:56.574923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.765 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.766 [2024-11-15 10:45:57.292337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 Malloc0 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 Delay0 
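The rpc_cmd calls in this trace wrap SPDK's scripts/rpc.py; the configuration chain the abort test builds (transport, a 64 MiB Malloc backing bdev, a delay bdev layered on top, and the subsystem, namespace, and listener added just below) would look like this driven by hand, assuming the default /var/tmp/spdk.sock RPC socket:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0         # 64 MiB bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # read avg/p99, write avg/p99 latency in usec
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
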
00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 [2024-11-15 10:45:57.381946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.026 10:45:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:38.286 [2024-11-15 10:45:57.574793] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:40.198 Initializing NVMe Controllers 00:05:40.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:40.198 controller IO queue size 128 less than required 00:05:40.198 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:40.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:40.198 Initialization complete. Launching workers. 
00:05:40.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28526 00:05:40.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28591, failed to submit 62 00:05:40.198 success 28530, unsuccessful 61, failed 0 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:40.198 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:40.198 rmmod nvme_tcp 00:05:40.459 rmmod nvme_fabrics 00:05:40.459 rmmod nvme_keyring 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 171680 ']' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 171680 ']' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 171680' 00:05:40.459 killing process with pid 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 171680 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:40.459 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:43.149 00:05:43.149 real 0m13.491s 00:05:43.149 user 0m14.164s 00:05:43.149 sys 0m6.742s 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:43.149 ************************************ 00:05:43.149 END TEST nvmf_abort 00:05:43.149 ************************************ 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:43.149 ************************************ 00:05:43.149 START TEST nvmf_ns_hotplug_stress 00:05:43.149 ************************************ 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:43.149 * Looking for test storage... 
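Between END TEST nvmf_abort above and the hotplug test spinning up here, nvmftestfini unwound the whole environment. Condensed from the trace (the remove_spdk_ns body is xtrace-disabled in the log, so the netns deletion shown is an assumption about what it performs):

  kill "$nvmfpid" && wait "$nvmfpid"       # stop nvmf_tgt (pid 171680 in this run)
  modprobe -v -r nvme-tcp                  # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK rule
  ip netns delete cvl_0_0_ns_spdk          # cvl_0_0 returns to the default namespace
  ip -4 addr flush cvl_0_1
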
00:05:43.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.149 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
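
Two quirks are worth flagging in the records around this point. The ever-longer PATH values above come from paths/export.sh prepending /opt/go, /opt/protoc and /opt/golangci again on every source, with no duplicate check; a minimal guard would look like the sketch below (prepend_path_once is an illustrative name, not an SPDK helper):

  # Prepend a directory to PATH only if it is not already present.
  prepend_path_once() {
    case ":$PATH:" in
      *":$1:"*) ;;              # already on PATH: nothing to do
      *) PATH="$1:$PATH" ;;     # otherwise prepend exactly once
    esac
  }
  prepend_path_once /opt/go/1.21.1/bin

The record just below then trips a real bash error: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as an integer, hence "[: : integer expression expected". Defaulting the variable keeps the comparison well-formed (sketch; SOME_TEST_FLAG stands in for whichever variable common.sh actually reads there):

  # Default the flag to 0 so [ ... -eq 1 ] always sees an integer.
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi
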
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:43.150 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:51.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.367 
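
The array setup above is how common.sh classifies NICs by PCI vendor:device pair: Intel (0x8086) E810 IDs 0x1592/0x159b feed e810, 0x37d2 feeds x722, and a list of Mellanox (0x15b3) ConnectX IDs feeds mlx; with SPDK_TEST_NVMF_NICS=e810 the candidate list then collapses to the e810 entries. The real script reads a pre-built pci_bus_cache map; a standalone equivalent using lspci would be (a sketch, assuming pciutils is installed):

  # Collect PCI addresses of Intel E810 ports (device IDs 0x159b and 0x1592).
  e810=()
  while read -r addr _; do
    e810+=("$addr")
  done < <(lspci -Dnd 8086:159b; lspci -Dnd 8086:1592)
  echo "found ${#e810[@]} E810 port(s): ${e810[*]}"

On this node that yields the two ports, 0000:4b:00.0 and 0000:4b:00.1, reported in the surrounding "Found" records.
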
10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:51.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:51.367 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:51.368 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
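
Each surviving PCI function is then mapped to its kernel interface through sysfs: the glob /sys/bus/pci/devices/$pci/net/* lists every netdev the function owns, the ##*/ expansion strips the path down to the bare name, and an up-state check keeps only live links, producing the "Found net devices under ..." records. The same resolution in isolation (a sketch; the real loop also copes with functions that expose no netdev at all):

  pci=0000:4b:00.0
  # Every netdev owned by this PCI function appears under its sysfs node.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
  for dev in "${pci_net_devs[@]}"; do
    [[ $(< "/sys/class/net/$dev/operstate") == up ]] && echo "under $pci: $dev"
  done
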
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:51.368 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
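
nvmf_tcp_init then wires the two ports into a point-to-point test rig: cvl_0_0 becomes the target side and is moved into a fresh network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24; an iptables ACCEPT for TCP/4420 and one ping in each direction (just below) prove the path before any NVMe/TCP traffic flows. The whole sequence condensed from the surrounding records (root required):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
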
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:51.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:51.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:05:51.368 00:05:51.368 --- 10.0.0.2 ping statistics --- 00:05:51.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.368 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:51.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:51.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:05:51.368 00:05:51.368 --- 10.0.0.1 ping statistics --- 00:05:51.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.368 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=176736 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 176736 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
176736 ']' 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.368 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 [2024-11-15 10:46:09.915296] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:05:51.368 [2024-11-15 10:46:09.915364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:51.368 [2024-11-15 10:46:10.018271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.368 [2024-11-15 10:46:10.073581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:51.368 [2024-11-15 10:46:10.073634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:51.368 [2024-11-15 10:46:10.073648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:51.368 [2024-11-15 10:46:10.073655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:51.368 [2024-11-15 10:46:10.073661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
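
nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xE, which matches the three reactors on cores 1-3 reported just below, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers before the first rpc.py call goes out. A hedged sketch of that wait (the real helper in autotest_common.sh has richer retry and error handling):

  nvmfpid=176736    # PID the log reports for nvmf_tgt
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Poll the RPC socket until the target answers; bail out if the process dies.
  for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    if "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
      break    # socket is up, target is ready for configuration
    fi
    sleep 0.5
  done
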
00:05:51.368 [2024-11-15 10:46:10.075636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.368 [2024-11-15 10:46:10.075859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.368 [2024-11-15 10:46:10.075859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.368 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:51.369 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:51.629 [2024-11-15 10:46:10.954070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.629 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:51.889 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:51.889 [2024-11-15 10:46:11.361290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.889 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:52.150 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:52.411 Malloc0 00:05:52.411 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:52.671 Delay0 00:05:52.671 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.671 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:52.932 NULL1 00:05:52.932 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:53.192 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=177356 00:05:53.192 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:53.192 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:53.192 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.452 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.452 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:53.452 10:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:53.713 true 00:05:53.713 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:53.713 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.973 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.973 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:53.973 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:54.233 true 00:05:54.233 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:54.233 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.494 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.754 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:54.754 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:54.754 true 00:05:54.754 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:54.754 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
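
At this point the fixture is complete: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 listening on 10.0.0.2:4420, Malloc0 wrapped by the delay bdev Delay0 as namespace 1, and the 1000-block NULL1 bdev attached as a second namespace. The test then enters its steady state: spdk_nvme_perf (PID 177356) issues 512-byte random reads at queue depth 128 for 30 seconds while the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one block per pass (null_size 1001, 1002, ...), looping for as long as kill -0 says perf is still alive; everything that follows in this log is that loop unrolled. The loop body, condensed (SPDK_DIR is an illustrative shorthand for the long checkout path):

  rpc_py="$SPDK_DIR/scripts/rpc.py"
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # run until the 30 s perf job exits
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug ns 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
    null_size=$((null_size + 1))
    "$rpc_py" bdev_null_resize NULL1 "$null_size"    # resize NULL1 under active I/O
  done
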
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.015 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.277 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:55.277 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:55.277 true 00:05:55.277 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:55.277 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.543 10:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.803 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:55.803 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:55.803 true 00:05:56.065 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:56.065 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.065 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.326 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:56.326 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:56.587 true 00:05:56.587 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:56.587 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.587 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.847 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:56.847 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:57.108 true 00:05:57.108 10:46:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:57.108 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.108 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.369 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:57.369 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:57.630 true 00:05:57.630 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:57.630 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.630 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.891 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:57.891 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:58.151 true 00:05:58.151 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:58.151 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.411 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.411 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:58.411 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:58.671 true 00:05:58.671 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:58.671 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.932 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.932 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:58.932 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:59.191 true 00:05:59.191 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:59.191 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.451 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.451 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:59.451 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:59.711 true 00:05:59.711 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:05:59.711 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.971 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.231 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:00.231 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:00.231 true 00:06:00.231 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:00.231 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.490 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.750 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:00.750 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:00.750 true 00:06:00.750 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:00.750 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.010 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.270 10:46:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:01.270 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:01.270 true 00:06:01.531 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:01.531 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.531 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.790 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:01.790 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:02.049 true 00:06:02.049 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:02.049 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.049 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.364 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:02.364 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:02.624 true 00:06:02.624 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:02.624 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.624 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.884 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:02.884 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:03.143 true 00:06:03.143 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:03.143 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.402 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.402 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:03.402 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:03.663 true 00:06:03.663 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:03.663 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.933 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.933 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:03.933 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:04.200 true 00:06:04.200 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:04.200 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.460 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.460 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:04.460 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:04.720 true 00:06:04.720 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:04.720 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.980 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.240 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:05.240 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:05.240 true 00:06:05.240 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:05.240 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.500 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.760 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:05.760 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:05.760 true 00:06:05.760 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:05.760 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.020 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.280 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:06.280 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:06.540 true 00:06:06.540 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:06.540 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.540 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.801 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:06.801 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:07.060 true 00:06:07.060 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:07.060 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.319 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.319 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:07.319 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:07.579 true 00:06:07.579 10:46:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:07.579 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.840 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.840 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:07.840 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:08.101 true 00:06:08.101 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:08.101 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.362 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.622 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:08.622 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:08.622 true 00:06:08.622 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:08.622 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.882 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.143 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:09.143 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:09.143 true 00:06:09.143 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:09.143 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.403 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.663 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:09.663 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:09.663 true 00:06:09.923 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:09.923 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.923 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.183 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:10.183 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:10.442 true 00:06:10.442 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:10.442 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.442 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.702 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:10.702 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:10.961 true 00:06:10.961 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:10.961 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.221 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.221 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:11.221 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:11.482 true 00:06:11.482 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:11.482 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.742 10:46:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:12.002 true 00:06:12.002 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:12.002 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.262 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.262 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:12.262 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:12.521 true 00:06:12.521 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:12.521 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.782 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.043 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:13.043 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:13.043 true 00:06:13.043 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:13.044 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.304 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.564 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:13.565 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:13.565 true 00:06:13.825 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:13.825 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.825 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.085 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:14.085 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:14.345 true 00:06:14.345 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:14.345 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.345 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.605 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:14.605 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:14.864 true 00:06:14.864 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:14.864 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.124 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.124 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:15.124 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:15.383 true 00:06:15.383 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:15.383 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.643 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.643 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:15.643 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:15.902 true 00:06:15.902 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:15.902 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.165 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.425 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:16.425 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:16.425 true 00:06:16.425 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:16.425 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.687 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.948 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:16.948 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:16.948 true 00:06:16.948 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:16.948 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.208 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.468 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:17.468 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:17.468 true 00:06:17.468 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:17.468 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.729 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.990 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:17.990 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:17.990 true 00:06:18.250 10:46:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:18.250 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.250 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.511 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:18.511 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:18.771 true 00:06:18.771 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:18.771 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.771 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.031 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:19.031 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:19.291 true 00:06:19.291 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:19.291 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.550 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.550 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:19.551 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:19.811 true 00:06:19.811 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:19.811 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.071 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.071 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:20.071 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:20.332 true 00:06:20.332 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:20.332 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.591 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.851 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:20.851 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:20.851 true 00:06:20.851 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:20.851 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.110 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.370 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:21.370 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:21.370 true 00:06:21.630 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:21.630 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.630 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.890 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:21.890 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:22.150 true 00:06:22.150 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356 00:06:22.150 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.150 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.409 10:46:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:22.409 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:22.669 true
00:06:22.669 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356
00:06:22.669 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:22.929 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:22.929 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:22.929 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:23.188 true
00:06:23.188 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356
00:06:23.188 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:23.448 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:23.448 Initializing NVMe Controllers
00:06:23.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:23.448 Controller IO queue size 128, less than required.
00:06:23.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:23.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:23.448 Initialization complete. Launching workers.
00:06:23.448 ========================================================
00:06:23.448                                                                             Latency(us)
00:06:23.448 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:23.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30837.13      15.06    4150.72    1200.73    8013.01
00:06:23.448 ========================================================
00:06:23.448 Total                                                                    :   30837.13      15.06    4150.72    1200.73    8013.01
00:06:23.448
00:06:23.448 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:23.448 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:23.709 true
00:06:23.709 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 177356
00:06:23.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (177356) - No such process
00:06:23.709 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 177356
00:06:23.709 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:23.968 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:24.228 null0
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:24.228 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:24.487 null1
00:06:24.487 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:24.487 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:24.487 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:24.487 null2
00:06:24.746 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:24.746 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
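The trace above is the whole single-namespace phase of the test: lines 44-50 of ns_hotplug_stress.sh keep re-plugging namespace 1 and growing the NULL1 bdev one step per pass until the I/O generator (pid 177356) exits, which is exactly the "No such process" line. As a minimal sketch reconstructed from the xtrace (the RPC calls and line numbers are from the log; the variable names and the starting value, chosen to match where this excerpt picks up, are assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=177356        # I/O generator launched earlier in the test
    null_size=1026
    while kill -0 "$perf_pid"; do                                          # line 44: loop while perf still runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove the namespace
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: hot-add it back
        null_size=$((null_size + 1))                                       # line 49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # line 50: resize a second bdev under load
    done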
10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:24.746 null3 00:06:24.746 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:24.746 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:24.746 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:25.005 null4 00:06:25.005 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.005 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.005 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:25.264 null5 00:06:25.264 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.264 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.265 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:25.265 null6 00:06:25.265 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.265 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.265 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:25.525 null7 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
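With the single-namespace phase done, the test removes both remaining namespaces (lines 54-55) and sets up the concurrent phase: the loop traced at lines 58-60 creates one 100 MB null bdev with 4096-byte blocks per worker thread. A sketch of that setup, reusing the $rpc_py shorthand from the previous block:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    done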
00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
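The sh@14-18 lines interleaved above are the body of the add_remove worker: each one pins a namespace ID to a bdev and re-plugs that namespace ten times against the shared subsystem. Reconstructed from the xtrace (the exact function text is a sketch; the nsid/bdev pairings come straight from the trace):

    add_remove() {
        local nsid=$1 bdev=$2                                                             # line 14
        for ((i = 0; i < 10; i++)); do                                                    # line 16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # line 18
        done
    }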
00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
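The repeating pids+=($!) entries are eight of these workers being launched in the background, one per namespace; the single wait on all of their pids (line 66, visible just below as "wait 183983 183984 ...") is what holds the test open until every worker finishes. A sketch of the launch-and-collect pattern traced at lines 62-66:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # line 63: namespace i+1 backed by null bdev i
        pids+=($!)                         # line 64: remember the worker pid
    done
    wait "${pids[@]}"                      # line 66: block until all eight workers exit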
00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 183983 183984 183986 183988 183990 183992 183993 183995 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:25.525 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.526 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.785 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
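From here on the log is pure churn: eight workers hammering nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns against the same subsystem with different namespace IDs, so the remove/add lines interleave in no fixed order. Not something this test does, but if you need to see the subsystem's live namespace set while such a run is going, the query side of the same RPC surface works:

    "$rpc_py" nvmf_get_subsystems   # dumps each subsystem, including its current "namespaces" array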
00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.046 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:26.307 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.308 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.568 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.568 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.569 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.569 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.569 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.569 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:26.569 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.569 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.829 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.089 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
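Every rpc.py call in this stretch presupposes state created before this excerpt begins: a TCP transport, the subsystem nqn.2016-06.io.spdk:cnode1, and the eight null bdevs null0-null7 that the namespaces point at. That setup is not shown here, so the following is only a plausible reconstruction from standard SPDK RPCs; the serial number and bdev sizes are invented placeholders, and the listener setup is omitted.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000001   # -a allows any host; serial is a placeholder
for i in {0..7}; do
    "$rpc" bdev_null_create "null$i" 100 512    # 100 MB null bdev, 512-byte blocks; sizes are illustrative
done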
00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.348 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.607 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.608 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.608 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.866 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.866 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.867 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.127 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.390 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.651 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.651 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
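Each iteration above is the same three lines of target/ns_hotplug_stress.sh: @16 advances the counter and re-tests (( i < 10 )), @17 hot-adds a namespace, and @18 hot-removes one, all against the live target. A minimal sketch of that loop shape, assuming the subsystem and null bdevs from earlier; the RANDOM-based NSID selection is illustrative rather than the script's exact logic, though the trace does show that NSID n is always backed by bdev null(n-1).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do              # the (( ++i )) / (( i < 10 )) pair at @16
    n=$((RANDOM % 8 + 1))                   # NSID between 1 and 8
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"          # @17
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true  # @18; removing an unattached NSID may fail, which a stress run tolerates (assumption)
done

The interleaving matters more than the exact IDs: adds and removes land in no fixed order, so the target's namespace attach/detach paths are exercised while the subsystem stays live.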
00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.912 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.173 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.173 10:46:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.437 rmmod nvme_tcp 00:06:29.437 rmmod nvme_fabrics 00:06:29.437 rmmod nvme_keyring 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 176736 ']' 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 176736 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 176736 ']' 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 176736 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 176736 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 176736' 00:06:29.437 killing process with pid 176736 00:06:29.437 10:46:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 176736
00:06:29.437 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 176736
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:29.699 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:31.615 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:31.615
00:06:31.615 real 0m48.972s
00:06:31.615 user 3m18.980s
00:06:31.615 sys 0m17.112s
00:06:31.615 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:31.615 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:31.615 ************************************
00:06:31.615 END TEST nvmf_ns_hotplug_stress
00:06:31.615 ************************************
00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:31.877 ************************************
00:06:31.877 START TEST nvmf_delete_subsystem
00:06:31.877 ************************************
00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:31.877 * Looking for test storage...
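The teardown traced above unwinds in a fixed order: nvmfcleanup unloads the host-side NVMe modules, killprocess stops the target app by PID, and nvmf_tcp_fini strips only the SPDK_NVMF-tagged iptables rules before dismantling the test namespace. A condensed sketch using this run's values (PID 176736, namespace cvl_0_0_ns_spdk, initiator interface cvl_0_1); the single commands stand in for the helper functions the trace names, so treat the bodies as simplified assumptions.

sync
modprobe -v -r nvme-tcp       # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
modprobe -v -r nvme-fabrics
kill -0 176736 && kill 176736 && wait 176736            # killprocess: stop the nvmf target (wait assumes it is a child of this shell)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the rules the test added
ip netns delete cvl_0_0_ns_spdk                         # _remove_spdk_ns, simplified
ip -4 addr flush cvl_0_1                                # clear the initiator-side address

run_test then closes the test with the time(1) summary between the banners (real 0m48.972s here) and immediately launches the next script in the suite, nvmf_delete_subsystem.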
00:06:31.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.877 --rc genhtml_branch_coverage=1 00:06:31.877 --rc genhtml_function_coverage=1 00:06:31.877 --rc genhtml_legend=1 00:06:31.877 --rc geninfo_all_blocks=1 00:06:31.877 --rc geninfo_unexecuted_blocks=1 00:06:31.877 00:06:31.877 ' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.877 --rc genhtml_branch_coverage=1 00:06:31.877 --rc genhtml_function_coverage=1 00:06:31.877 --rc genhtml_legend=1 00:06:31.877 --rc geninfo_all_blocks=1 00:06:31.877 --rc geninfo_unexecuted_blocks=1 00:06:31.877 00:06:31.877 ' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.877 --rc genhtml_branch_coverage=1 00:06:31.877 --rc genhtml_function_coverage=1 00:06:31.877 --rc genhtml_legend=1 00:06:31.877 --rc geninfo_all_blocks=1 00:06:31.877 --rc geninfo_unexecuted_blocks=1 00:06:31.877 00:06:31.877 ' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.877 --rc genhtml_branch_coverage=1 00:06:31.877 --rc genhtml_function_coverage=1 00:06:31.877 --rc genhtml_legend=1 00:06:31.877 --rc geninfo_all_blocks=1 00:06:31.877 --rc geninfo_unexecuted_blocks=1 00:06:31.877 00:06:31.877 ' 00:06:31.877 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.138 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:32.138 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.139 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:40.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.286 
10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:40.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:40.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:40.286 Found net devices under 0000:4b:00.1: cvl_0_1 
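
The xtrace above shows gather_supported_nvmf_pci_devs mapping each supported NIC's PCI address to its kernel interface name purely through sysfs. A minimal standalone sketch of that lookup (the PCI address and the cvl_0_0 result come from this run; this condenses the traced logic rather than quoting common.sh verbatim):

    pci=0000:4b:00.0
    # a PCI NIC exposes its network interfaces under .../net/ in sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the directory prefix, keeping only interface names (e.g. cvl_0_0)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
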
00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.286 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:40.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:06:40.287 00:06:40.287 --- 10.0.0.2 ping statistics --- 00:06:40.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.287 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:06:40.287 00:06:40.287 --- 10.0.0.1 ping statistics --- 00:06:40.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.287 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=189163 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 189163 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 189163 ']' 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.287 10:46:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.287 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 [2024-11-15 10:46:59.036806] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:06:40.287 [2024-11-15 10:46:59.036869] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.287 [2024-11-15 10:46:59.136442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.287 [2024-11-15 10:46:59.188208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.287 [2024-11-15 10:46:59.188264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.287 [2024-11-15 10:46:59.188273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.287 [2024-11-15 10:46:59.188280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.287 [2024-11-15 10:46:59.188286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:40.287 [2024-11-15 10:46:59.189896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.287 [2024-11-15 10:46:59.189898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-15 10:46:59.914215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:40.548 10:46:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-15 10:46:59.938528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 NULL1 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 Delay0 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=189354 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:40.548 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:40.548 [2024-11-15 10:47:00.075636] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
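
The nvmf_tcp_init sequence traced at 00:06:40 above turns the two E810 ports into a two-endpoint test topology on a single host: the target port is moved into its own network namespace, presumably so traffic between 10.0.0.1 and 10.0.0.2 traverses the physical link rather than the kernel's local path. Condensed from the xtrace, with interface names and addresses exactly as recorded:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment tag is what lets nvmftestfini strip the rule later via
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3, per the trace), while spdk_nvme_perf and the RPC client stay in the root namespace.
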
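With the target up, delete_subsystem.sh provisions it entirely over JSON-RPC; rpc_cmd in the trace is effectively scripts/rpc.py talking to /var/tmp/spdk.sock. The same bring-up written as direct invocations, arguments copied from the xtrace (the four 1000000 values are bdev_delay's average and p99 read/write latencies in microseconds, i.e. roughly one second added to every I/O, plausibly so that perf's 128-deep queues still hold commands when the subsystem is deleted mid-run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                    # 1000 MiB backing, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # latencies in microseconds
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
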
00:06:42.460 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:42.460 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.460 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 Read completed with error (sct=0, sc=8) 00:06:43.032 starting I/O failed: -6 00:06:43.032 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 [2024-11-15 10:47:02.321969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b2c0 is same with the state(6) to be set 00:06:43.033 Read completed with error 
(sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 [2024-11-15 10:47:02.323203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b680 is same with the state(6) to be set 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read 
completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 starting I/O failed: -6 00:06:43.033 [2024-11-15 10:47:02.328172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ec8000c40 is same with the state(6) to be set 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error 
(sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.033 Write completed with error (sct=0, sc=8) 00:06:43.033 Read completed with error (sct=0, sc=8) 00:06:43.976 [2024-11-15 10:47:03.297832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 [2024-11-15 10:47:03.325545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b4a0 is same with the state(6) to be set 00:06:43.976 Read completed with error (sct=0, sc=8) 00:06:43.976 Read completed with error (sct=0, sc=8) 
00:06:43.976 Write completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 [2024-11-15 10:47:03.325907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b860 is same with the state(6) to be set 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 [2024-11-15 10:47:03.328947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ec800d020 is same with the state(6) to be set 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Write completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read 
completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 Read completed with error (sct=0, sc=8) 00:06:43.977 [2024-11-15 10:47:03.330136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ec800d7c0 is same with the state(6) to be set 00:06:43.977 Initializing NVMe Controllers 00:06:43.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:43.977 Controller IO queue size 128, less than required. 00:06:43.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:43.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:43.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:43.977 Initialization complete. Launching workers. 00:06:43.977 ======================================================== 00:06:43.977 Latency(us) 00:06:43.977 Device Information : IOPS MiB/s Average min max 00:06:43.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.75 0.08 885989.02 555.80 1007043.99 00:06:43.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.30 0.08 913310.42 348.17 1011313.47 00:06:43.977 ======================================================== 00:06:43.977 Total : 336.05 0.16 899184.24 348.17 1011313.47 00:06:43.977 00:06:43.977 [2024-11-15 10:47:03.330867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2c9a0 (9): Bad file descriptor 00:06:43.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:43.977 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.977 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:43.977 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 189354 00:06:43.977 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 189354 00:06:44.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (189354) - No such process 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 189354 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 189354 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.547 10:47:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 189354 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.547 [2024-11-15 10:47:03.863313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=190193 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:44.547 10:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.547 [2024-11-15 10:47:03.968569] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
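
The delay=0 / kill -0 / sleep 0.5 records that follow are the script's liveness poll: after re-creating the subsystem it starts a 3 second perf run (pid 190193 in this run) and waits for it to exit on its own, giving up if it is still alive after the poll budget. The control flow below is reconstructed from the xtrace of delete_subsystem.sh lines 56-60, not copied from the script:

    perf_pid=190193                           # this run's spdk_nvme_perf pid
    delay=0
    while kill -0 "$perf_pid"; do             # succeeds while perf is still alive
        (( delay++ > 20 )) && exit 1          # ~10 s budget at 0.5 s per poll
        sleep 0.5                             # (failure path is a reconstruction)
    done

The later "kill: (190193) - No such process" line is this loop's kill -0 finally failing once perf has exited, after which the script can wait on the pid and move on to teardown.
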
00:06:45.116 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.116 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:45.116 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.376 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.376 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:45.376 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.945 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.945 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:45.945 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.515 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.515 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:46.515 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.084 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.084 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:47.084 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.653 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.653 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:47.653 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.653 Initializing NVMe Controllers 00:06:47.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.653 Controller IO queue size 128, less than required. 00:06:47.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:47.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:47.653 Initialization complete. Launching workers. 
00:06:47.653 ======================================================== 00:06:47.653 Latency(us) 00:06:47.653 Device Information : IOPS MiB/s Average min max 00:06:47.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004028.06 1000209.71 1042315.79 00:06:47.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002923.01 1000209.51 1008712.05 00:06:47.653 ======================================================== 00:06:47.653 Total : 256.00 0.12 1003475.54 1000209.51 1042315.79 00:06:47.653 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 190193 00:06:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (190193) - No such process 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 190193 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.912 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.912 rmmod nvme_tcp 00:06:48.172 rmmod nvme_fabrics 00:06:48.172 rmmod nvme_keyring 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 189163 ']' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 189163 ']' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 189163' 00:06:48.172 killing process with pid 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 189163 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.172 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:50.715 00:06:50.715 real 0m18.541s 00:06:50.715 user 0m31.230s 00:06:50.715 sys 0m6.890s 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.715 ************************************ 00:06:50.715 END TEST nvmf_delete_subsystem 00:06:50.715 ************************************ 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.715 ************************************ 00:06:50.715 START TEST nvmf_host_management 00:06:50.715 ************************************ 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:50.715 * Looking for test storage... 
00:06:50.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.715 10:47:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.715 --rc genhtml_branch_coverage=1 00:06:50.715 --rc genhtml_function_coverage=1 00:06:50.715 --rc genhtml_legend=1 00:06:50.715 --rc geninfo_all_blocks=1 00:06:50.715 --rc geninfo_unexecuted_blocks=1 00:06:50.715 00:06:50.715 ' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.715 --rc genhtml_branch_coverage=1 00:06:50.715 --rc genhtml_function_coverage=1 00:06:50.715 --rc genhtml_legend=1 00:06:50.715 --rc geninfo_all_blocks=1 00:06:50.715 --rc geninfo_unexecuted_blocks=1 00:06:50.715 00:06:50.715 ' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.715 --rc genhtml_branch_coverage=1 00:06:50.715 --rc genhtml_function_coverage=1 00:06:50.715 --rc genhtml_legend=1 00:06:50.715 --rc geninfo_all_blocks=1 00:06:50.715 --rc geninfo_unexecuted_blocks=1 00:06:50.715 00:06:50.715 ' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.715 --rc genhtml_branch_coverage=1 00:06:50.715 --rc genhtml_function_coverage=1 00:06:50.715 --rc genhtml_legend=1 00:06:50.715 --rc geninfo_all_blocks=1 00:06:50.715 --rc geninfo_unexecuted_blocks=1 00:06:50.715 00:06:50.715 ' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.715 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:50.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.716 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:58.850 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:58.850 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:58.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.850 10:47:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:58.850 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.850 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:06:58.851 00:06:58.851 --- 10.0.0.2 ping statistics --- 00:06:58.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.851 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:06:58.851 00:06:58.851 --- 10.0.0.1 ping statistics --- 00:06:58.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.851 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=195222 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 195222 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:58.851 10:47:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 195222 ']' 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.851 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.851 [2024-11-15 10:47:17.670428] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:06:58.851 [2024-11-15 10:47:17.670495] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.851 [2024-11-15 10:47:17.773002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.851 [2024-11-15 10:47:17.825999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.851 [2024-11-15 10:47:17.826056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.851 [2024-11-15 10:47:17.826065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.851 [2024-11-15 10:47:17.826072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.851 [2024-11-15 10:47:17.826082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
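The nvmf/common.sh sequence traced above (nvmf_tcp_init) builds the phy-mode test topology: of the two discovered E810 ports, the target port is moved into a private network namespace while the initiator port stays in the root namespace, so a single host can drive real NIC-to-NIC NVMe/TCP traffic against itself. Condensed to the bare commands this run executed (the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addresses are values discovered on this particular machine and will differ elsewhere; the iptables comment argument from the trace is omitted here):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside the ns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # root ns -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator, verified above

Every target-side process from here on, nvmf_tgt included, is launched through the recorded NVMF_TARGET_NS_CMD wrapper, i.e. ip netns exec cvl_0_0_ns_spdk.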
00:06:58.851 [2024-11-15 10:47:17.828169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.851 [2024-11-15 10:47:17.828330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.851 [2024-11-15 10:47:17.828490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.851 [2024-11-15 10:47:17.828490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.111 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 [2024-11-15 10:47:18.538275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 Malloc0 00:06:59.112 [2024-11-15 10:47:18.625167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.112 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=195301 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 195301 /var/tmp/bdevperf.sock 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 195301 ']' 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:59.372 { 00:06:59.372 "params": { 00:06:59.372 "name": "Nvme$subsystem", 00:06:59.372 "trtype": "$TEST_TRANSPORT", 00:06:59.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:59.372 "adrfam": "ipv4", 00:06:59.372 "trsvcid": "$NVMF_PORT", 00:06:59.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:59.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:59.372 "hdgst": ${hdgst:-false}, 00:06:59.372 "ddgst": ${ddgst:-false} 00:06:59.372 }, 00:06:59.372 "method": "bdev_nvme_attach_controller" 00:06:59.372 } 00:06:59.372 EOF 00:06:59.372 )") 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:59.372 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:59.372 "params": { 00:06:59.372 "name": "Nvme0", 00:06:59.372 "trtype": "tcp", 00:06:59.372 "traddr": "10.0.0.2", 00:06:59.372 "adrfam": "ipv4", 00:06:59.372 "trsvcid": "4420", 00:06:59.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:59.372 "hdgst": false, 00:06:59.372 "ddgst": false 00:06:59.372 }, 00:06:59.372 "method": "bdev_nvme_attach_controller" 00:06:59.372 }' 00:06:59.372 [2024-11-15 10:47:18.736706] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
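The --json /dev/fd/63 handed to bdevperf above is produced by gen_nvmf_target_json, which expands the traced heredoc once per argument (here only subsystem 0) into a bdev_nvme_attach_controller entry; the printf in the trace shows exactly that entry after jq normalization. The trace records only the config fragment, so the enclosing wrapper below is an assumption about the document shape bdevperf receives, with the values recorded in this run filled in:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

With this config bdevperf attaches Nvme0 to the in-namespace target at 10.0.0.2:4420 and runs the requested -q 64 -o 65536 -w verify -t 10 workload, which is the I/O the 'Running I/O for 10 seconds...' line below refers to.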
00:06:59.372 [2024-11-15 10:47:18.736781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195301 ] 00:06:59.372 [2024-11-15 10:47:18.830737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.372 [2024-11-15 10:47:18.884682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.632 Running I/O for 10 seconds... 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:00.206 10:47:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.206 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:00.206 [2024-11-15 10:47:19.641642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb8150 is same with the state(6) to be set
[... the same tcp.c:1773 'recv state of tqpair=0x1fb8150 is same with the state(6) to be set' error is logged dozens more times between 10:47:19.641642 and 10:47:19.642172 ...]
00:07:00.206 [2024-11-15 10:47:19.642274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-15 10:47:19.642336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for READ cid:1 through cid:49, lba 106624 through 112768 in steps of 128; every completion reports qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-11-15 10:47:19.643285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-11-15 10:47:19.643293] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.208 [2024-11-15 10:47:19.643524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.208 [2024-11-15 10:47:19.643533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930ee0 is same with the state(6) to be set 00:07:00.208 [2024-11-15 10:47:19.644879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:00.208 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.209 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:00.209 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.209 task offset: 106496 on job bdev=Nvme0n1 fails 00:07:00.209 00:07:00.209 Latency(us) 00:07:00.209 [2024-11-15T09:47:19.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.209 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:00.209 Job: Nvme0n1 ended in about 0.54 seconds with error 00:07:00.209 Verification LBA range: start 0x0 length 0x400 00:07:00.209 Nvme0n1 : 0.54 1540.18 96.26 118.48 0.00 37585.72 4805.97 34734.08 00:07:00.209 [2024-11-15T09:47:19.736Z] =================================================================================================================== 00:07:00.209 [2024-11-15T09:47:19.736Z] Total : 1540.18 96.26 118.48 0.00 37585.72 4805.97 34734.08 00:07:00.209 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.209 [2024-11-15 10:47:19.647135] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.209 [2024-11-15 10:47:19.647180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718000 (9): Bad file descriptor 00:07:00.209 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.209 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:00.209 [2024-11-15 10:47:19.662912] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
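The burst of ABORTED - SQ DELETION (00/08) completions above is the expected signature of a queue pair being torn down while I/O is still queued: the target drops the connection, every outstanding READ is failed back to bdevperf, and the nvme driver then disconnects and resets the controller. The rpc_cmd at host_management.sh line 85 restores the host's access so that reset can succeed; the matching nvmf_subsystem_remove_host that provoked the failure runs earlier in the same script and is outside this excerpt. A minimal sketch of that sequence, assuming the NQNs and rpc.py path from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# revoke the host's access; in-flight I/O completes with ABORTED - SQ DELETION (00/08)
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# restore access so the driver's automatic controller reset can reconnect
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0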
00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 195301 00:07:01.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (195301) - No such process 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:01.149 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:01.150 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:01.150 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:01.150 { 00:07:01.150 "params": { 00:07:01.150 "name": "Nvme$subsystem", 00:07:01.150 "trtype": "$TEST_TRANSPORT", 00:07:01.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:01.150 "adrfam": "ipv4", 00:07:01.150 "trsvcid": "$NVMF_PORT", 00:07:01.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:01.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:01.150 "hdgst": ${hdgst:-false}, 00:07:01.150 "ddgst": ${ddgst:-false} 00:07:01.150 }, 00:07:01.150 "method": "bdev_nvme_attach_controller" 00:07:01.150 } 00:07:01.150 EOF 00:07:01.150 )") 00:07:01.150 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:01.150 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:01.409 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:01.409 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:01.409 "params": { 00:07:01.409 "name": "Nvme0", 00:07:01.409 "trtype": "tcp", 00:07:01.409 "traddr": "10.0.0.2", 00:07:01.409 "adrfam": "ipv4", 00:07:01.409 "trsvcid": "4420", 00:07:01.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:01.409 "hdgst": false, 00:07:01.409 "ddgst": false 00:07:01.409 }, 00:07:01.409 "method": "bdev_nvme_attach_controller" 00:07:01.409 }' 00:07:01.409 [2024-11-15 10:47:20.726364] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:07:01.409 [2024-11-15 10:47:20.726421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195795 ] 00:07:01.409 [2024-11-15 10:47:20.812614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.409 [2024-11-15 10:47:20.848360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.670 Running I/O for 1 seconds... 
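The gen_nvmf_target_json helper above builds the bdevperf configuration on the fly and hands it over through /dev/fd/62. The same run can be reproduced with the config written to a file instead; this is a sketch assuming the standard SPDK JSON-config wrapper (subsystems -> bdev -> config) around the bdev_nvme_attach_controller fragment printed above:

cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1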
00:07:02.610 1817.00 IOPS, 113.56 MiB/s 00:07:02.610 Latency(us) 00:07:02.610 [2024-11-15T09:47:22.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.610 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.610 Verification LBA range: start 0x0 length 0x400 00:07:02.610 Nvme0n1 : 1.01 1867.27 116.70 0.00 0.00 33580.44 955.73 32986.45 00:07:02.610 [2024-11-15T09:47:22.137Z] =================================================================================================================== 00:07:02.610 [2024-11-15T09:47:22.137Z] Total : 1867.27 116.70 0.00 0.00 33580.44 955.73 32986.45 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.610 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.870 rmmod nvme_tcp 00:07:02.870 rmmod nvme_fabrics 00:07:02.870 rmmod nvme_keyring 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 195222 ']' 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 195222 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 195222 ']' 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 195222 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:02.870 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 195222 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 195222' 00:07:02.871 killing process with pid 195222 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 195222 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 195222 00:07:02.871 [2024-11-15 10:47:22.362031] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.871 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:05.416 00:07:05.416 real 0m14.651s 00:07:05.416 user 0m22.860s 00:07:05.416 sys 0m6.823s 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.416 ************************************ 00:07:05.416 END TEST nvmf_host_management 00:07:05.416 ************************************ 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.416 ************************************ 00:07:05.416 START TEST nvmf_lvol 00:07:05.416 ************************************ 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:05.416 * Looking for test storage... 00:07:05.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.416 --rc genhtml_branch_coverage=1 00:07:05.416 --rc genhtml_function_coverage=1 00:07:05.416 --rc genhtml_legend=1 00:07:05.416 --rc geninfo_all_blocks=1 00:07:05.416 --rc geninfo_unexecuted_blocks=1 00:07:05.416 00:07:05.416 ' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.416 --rc genhtml_branch_coverage=1 00:07:05.416 --rc genhtml_function_coverage=1 00:07:05.416 --rc genhtml_legend=1 00:07:05.416 --rc geninfo_all_blocks=1 00:07:05.416 --rc geninfo_unexecuted_blocks=1 00:07:05.416 00:07:05.416 ' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.416 --rc genhtml_branch_coverage=1 00:07:05.416 --rc genhtml_function_coverage=1 00:07:05.416 --rc genhtml_legend=1 00:07:05.416 --rc geninfo_all_blocks=1 00:07:05.416 --rc geninfo_unexecuted_blocks=1 00:07:05.416 00:07:05.416 ' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.416 --rc genhtml_branch_coverage=1 00:07:05.416 --rc genhtml_function_coverage=1 00:07:05.416 --rc genhtml_legend=1 00:07:05.416 --rc geninfo_all_blocks=1 00:07:05.416 --rc geninfo_unexecuted_blocks=1 00:07:05.416 00:07:05.416 ' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
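The lt / cmp_versions trace above is a pure-bash dotted-version comparison: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A distilled sketch of that logic (not the exact scripts/common.sh implementation; purely numeric fields are assumed):

lt() {
    local IFS=.-:                 # split version fields on '.', '-' and ':'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not less-than
}

# usage mirroring the trace: lcov 1.x needs the legacy --rc coverage options
lt "$(lcov --version | awk '{print $NF}')" 2 &&
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'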
00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.416 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.417 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.568 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:13.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:13.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.569 10:47:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:13.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:13.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.569 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:07:13.569 00:07:13.569 --- 10.0.0.2 ping statistics --- 00:07:13.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.569 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:07:13.569 00:07:13.569 --- 10.0.0.1 ping statistics --- 00:07:13.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.569 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=200312 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 200312 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 200312 ']' 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.569 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 [2024-11-15 10:47:32.342543] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
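The nvmftestinit / nvmfappstart sequence traced above pins one e810 port (cvl_0_0, the target side) inside a network namespace and leaves the other (cvl_0_1, the initiator side) in the root namespace, so host and target traffic cross a real link. Consolidated as a sketch, using the interface names, addresses and flags from this run; the final polling loop is a simple stand-in for the waitforlisten helper:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# poll the default RPC socket until the target is ready to accept commands
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 spdk_get_version \
    >/dev/null 2>&1; do sleep 0.5; done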
00:07:13.569 [2024-11-15 10:47:32.342618] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.569 [2024-11-15 10:47:32.443595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.569 [2024-11-15 10:47:32.495660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.569 [2024-11-15 10:47:32.495711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.569 [2024-11-15 10:47:32.495721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.569 [2024-11-15 10:47:32.495728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.569 [2024-11-15 10:47:32.495735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.569 [2024-11-15 10:47:32.497588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.570 [2024-11-15 10:47:32.497725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.570 [2024-11-15 10:47:32.497833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.831 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.093 [2024-11-15 10:47:33.384326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.093 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.353 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:14.353 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.354 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:14.354 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:14.614 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:14.875 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=51b032d9-2b82-4b4e-bc13-c5b89cda3aba 00:07:14.875 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51b032d9-2b82-4b4e-bc13-c5b89cda3aba lvol 20 00:07:15.136 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=00d88e1b-a48b-4ed7-aebd-d137ce8f32a7 00:07:15.136 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.136 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00d88e1b-a48b-4ed7-aebd-d137ce8f32a7 00:07:15.409 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:15.670 [2024-11-15 10:47:35.024890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.670 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.930 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=201015 00:07:15.930 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:15.930 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:16.868 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 00d88e1b-a48b-4ed7-aebd-d137ce8f32a7 MY_SNAPSHOT 00:07:17.128 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=de6c0793-10d3-4d7d-a2a4-a8124201371e 00:07:17.128 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 00d88e1b-a48b-4ed7-aebd-d137ce8f32a7 30 00:07:17.388 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone de6c0793-10d3-4d7d-a2a4-a8124201371e MY_CLONE 00:07:17.388 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=77ee6166-d18d-42c7-84c7-d30a11d5930b 00:07:17.388 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 77ee6166-d18d-42c7-84c7-d30a11d5930b 00:07:17.957 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 201015 00:07:26.216 Initializing NVMe Controllers 00:07:26.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:26.216 Controller IO queue size 128, less than required. 00:07:26.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
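Condensed from the trace above, the nvmf_lvol test body is this RPC sequence (a sketch: rpc.py abbreviates the full scripts/rpc.py path, and the UUIDs the log prints are captured into shell variables instead of being spelled out):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # with spdk_nvme_perf writing randwrite in the background (-c 0x18, qd 128),
    # mutate the live volume:
    snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"

The point of the test is that snapshot, resize, clone and inflate all succeed while the lvol is exported over NVMe/TCP and under write load; the perf results follow.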
00:07:26.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:26.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:26.216 Initialization complete. Launching workers.
00:07:26.216 ========================================================
00:07:26.216 Latency(us)
00:07:26.216 Device Information : IOPS MiB/s Average min max
00:07:26.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16536.90 64.60 7743.04 1502.02 46793.65
00:07:26.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17475.90 68.27 7326.67 1851.81 43647.72
00:07:26.216 ========================================================
00:07:26.216 Total : 34012.80 132.86 7529.11 1502.02 46793.65
00:07:26.216
00:07:26.216 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:26.478 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00d88e1b-a48b-4ed7-aebd-d137ce8f32a7
00:07:26.739 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51b032d9-2b82-4b4e-bc13-c5b89cda3aba
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:26.999 rmmod nvme_tcp
00:07:26.999 rmmod nvme_fabrics
00:07:26.999 rmmod nvme_keyring
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 200312 ']'
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 200312
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 200312 ']'
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 200312
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 200312
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 200312'
00:07:26.999 killing process with pid 200312
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 200312
00:07:26.999 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 200312
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:27.260 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:29.168
00:07:29.168 real 0m24.100s
00:07:29.168 user 1m5.422s
00:07:29.168 sys 0m8.671s
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:29.168 ************************************
00:07:29.168 END TEST nvmf_lvol
00:07:29.168 ************************************
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:29.168 10:47:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:29.430 ************************************
00:07:29.430 START TEST nvmf_lvs_grow
00:07:29.430 ************************************
00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:29.430 * Looking for test storage...
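For reference, the nvmf_lvol cleanup traced just above reduces to the following (a sketch: waits and retries are elided, and the iptr helper is expanded into the three commands the trace shows):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    modprobe -r nvme-tcp           # rmmod output above shows fabrics/keyring unload too
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rule
    ip -4 addr flush cvl_0_1       # _remove_spdk_ns then dismantles cvl_0_0_ns_spdk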
00:07:29.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.430 --rc genhtml_branch_coverage=1 00:07:29.430 --rc genhtml_function_coverage=1 00:07:29.430 --rc genhtml_legend=1 00:07:29.430 --rc geninfo_all_blocks=1 00:07:29.430 --rc geninfo_unexecuted_blocks=1 00:07:29.430 00:07:29.430 ' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.430 --rc genhtml_branch_coverage=1 00:07:29.430 --rc genhtml_function_coverage=1 00:07:29.430 --rc genhtml_legend=1 00:07:29.430 --rc geninfo_all_blocks=1 00:07:29.430 --rc geninfo_unexecuted_blocks=1 00:07:29.430 00:07:29.430 ' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:29.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.430 --rc genhtml_branch_coverage=1 00:07:29.430 --rc genhtml_function_coverage=1 00:07:29.430 --rc genhtml_legend=1 00:07:29.430 --rc geninfo_all_blocks=1 00:07:29.430 --rc geninfo_unexecuted_blocks=1 00:07:29.430 00:07:29.430 ' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.430 --rc genhtml_branch_coverage=1 00:07:29.430 --rc genhtml_function_coverage=1 00:07:29.430 --rc genhtml_legend=1 00:07:29.430 --rc geninfo_all_blocks=1 00:07:29.430 --rc geninfo_unexecuted_blocks=1 00:07:29.430 00:07:29.430 ' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.430 10:47:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.430 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.431 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.431 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.431 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.431 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.691 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:37.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:37.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.829 10:47:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:37.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:37.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.829 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:07:37.830 00:07:37.830 --- 10.0.0.2 ping statistics --- 00:07:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.830 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:07:37.830 00:07:37.830 --- 10.0.0.1 ping statistics --- 00:07:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.830 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=207401 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 207401 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 207401 ']' 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.830 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.830 [2024-11-15 10:47:56.520356] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
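The nvmftestinit plumbing traced above is easier to read as one block; a sketch using the interface names and addresses from this run (the iptables comment tag is elided):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # netns -> initiator

Two physical E810 ports on one host (cvl_0_0, cvl_0_1) are split across network namespaces so the NVMe/TCP traffic really crosses the wire, which is what NET_TYPE=phy means here.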
00:07:37.830 [2024-11-15 10:47:56.520420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.830 [2024-11-15 10:47:56.627323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.830 [2024-11-15 10:47:56.678996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.830 [2024-11-15 10:47:56.679052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.830 [2024-11-15 10:47:56.679060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.830 [2024-11-15 10:47:56.679067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.830 [2024-11-15 10:47:56.679075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.830 [2024-11-15 10:47:56.679909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.830 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.830 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:37.830 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.830 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.830 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.091 [2024-11-15 10:47:57.540831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.091 ************************************ 00:07:38.091 START TEST lvs_grow_clean 00:07:38.091 ************************************ 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:38.091 10:47:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:38.091 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.352 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.352 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.352 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:38.352 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:38.612 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:38.612 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:38.612 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 lvol 150 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.872 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:39.133 [2024-11-15 10:47:58.559955] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:39.133 [2024-11-15 10:47:58.560030] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:39.133 true 00:07:39.133 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:39.133 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:39.395 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:39.395 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.656 10:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 00:07:39.656 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:39.916 [2024-11-15 10:47:59.298303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.916 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=208112 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 208112 /var/tmp/bdevperf.sock 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 208112 ']' 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.177 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:40.177 [2024-11-15 10:47:59.555655] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
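Condensed, the lvs_grow_clean setup traced above is (a sketch: aio_bdev stands for the full file path under test/nvmf/target, and rpc.py for scripts/rpc.py):

    truncate -s 200M aio_bdev
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096        # 51200 blocks of 4 KiB
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
    truncate -s 400M aio_bdev
    rpc.py bdev_aio_rescan aio_bdev                      # 51200 -> 102400 blocks, as logged

The 49 is simple arithmetic: 200 MiB at the 4 MiB cluster size is 50 clusters, and the store keeps one for metadata. Growing the backing file to 400 MiB changes nothing by itself; the lvstore only picks it up when bdev_lvol_grow_lvstore runs below, while bdevperf keeps writing.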
00:07:40.177 [2024-11-15 10:47:59.555721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid208112 ]
00:07:40.177 [2024-11-15 10:47:59.647765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.177 [2024-11-15 10:47:59.699707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:41.119 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:41.119 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:07:41.119 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:41.380 Nvme0n1
00:07:41.380 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:41.380 [
00:07:41.380 {
00:07:41.380 "name": "Nvme0n1",
00:07:41.380 "aliases": [
00:07:41.380 "e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4"
00:07:41.380 ],
00:07:41.380 "product_name": "NVMe disk",
00:07:41.380 "block_size": 4096,
00:07:41.380 "num_blocks": 38912,
00:07:41.380 "uuid": "e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4",
00:07:41.380 "numa_id": 0,
00:07:41.380 "assigned_rate_limits": {
00:07:41.380 "rw_ios_per_sec": 0,
00:07:41.380 "rw_mbytes_per_sec": 0,
00:07:41.380 "r_mbytes_per_sec": 0,
00:07:41.380 "w_mbytes_per_sec": 0
00:07:41.380 },
00:07:41.380 "claimed": false,
00:07:41.380 "zoned": false,
00:07:41.380 "supported_io_types": {
00:07:41.380 "read": true,
00:07:41.380 "write": true,
00:07:41.380 "unmap": true,
00:07:41.380 "flush": true,
00:07:41.380 "reset": true,
00:07:41.380 "nvme_admin": true,
00:07:41.380 "nvme_io": true,
00:07:41.380 "nvme_io_md": false,
00:07:41.380 "write_zeroes": true,
00:07:41.380 "zcopy": false,
00:07:41.380 "get_zone_info": false,
00:07:41.380 "zone_management": false,
00:07:41.380 "zone_append": false,
00:07:41.380 "compare": true,
00:07:41.380 "compare_and_write": true,
00:07:41.380 "abort": true,
00:07:41.380 "seek_hole": false,
00:07:41.380 "seek_data": false,
00:07:41.380 "copy": true,
00:07:41.380 "nvme_iov_md": false
00:07:41.380 },
00:07:41.380 "memory_domains": [
00:07:41.380 {
00:07:41.380 "dma_device_id": "system",
00:07:41.380 "dma_device_type": 1
00:07:41.380 }
00:07:41.380 ],
00:07:41.380 "driver_specific": {
00:07:41.380 "nvme": [
00:07:41.380 {
00:07:41.380 "trid": {
00:07:41.380 "trtype": "TCP",
00:07:41.380 "adrfam": "IPv4",
00:07:41.380 "traddr": "10.0.0.2",
00:07:41.380 "trsvcid": "4420",
00:07:41.380 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:41.380 },
00:07:41.380 "ctrlr_data": {
00:07:41.380 "cntlid": 1,
00:07:41.380 "vendor_id": "0x8086",
00:07:41.380 "model_number": "SPDK bdev Controller",
00:07:41.380 "serial_number": "SPDK0",
00:07:41.380 "firmware_revision": "25.01",
00:07:41.380 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:41.380 "oacs": {
00:07:41.380 "security": 0,
00:07:41.380 "format": 0,
00:07:41.380 "firmware": 0,
00:07:41.380 "ns_manage": 0
00:07:41.380 },
00:07:41.380 "multi_ctrlr": true,
00:07:41.380 "ana_reporting": false
00:07:41.380 },
00:07:41.380 "vs": {
00:07:41.380 "nvme_version": "1.3"
00:07:41.380 },
00:07:41.380 "ns_data": {
00:07:41.380 "id": 1,
00:07:41.380 "can_share": true
00:07:41.380 }
00:07:41.380 }
00:07:41.380 ],
00:07:41.380 "mp_policy": "active_passive"
00:07:41.380 }
00:07:41.380 }
00:07:41.380 ]
00:07:41.380 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=208489
00:07:41.380 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:41.380 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:41.640 Running I/O for 10 seconds...
00:07:42.583 Latency(us)
00:07:42.583 [2024-11-15T09:48:02.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:42.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:42.583 Nvme0n1 : 1.00 24824.00 96.97 0.00 0.00 0.00 0.00 0.00
00:07:42.583 [2024-11-15T09:48:02.110Z] ===================================================================================================================
00:07:42.583 [2024-11-15T09:48:02.110Z] Total : 24824.00 96.97 0.00 0.00 0.00 0.00 0.00
00:07:42.583
00:07:43.525 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e319443c-9d1c-4a43-8f17-fd7f39a24d26
00:07:43.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:43.525 Nvme0n1 : 2.00 25076.00 97.95 0.00 0.00 0.00 0.00 0.00
00:07:43.525 [2024-11-15T09:48:03.052Z] ===================================================================================================================
00:07:43.525 [2024-11-15T09:48:03.052Z] Total : 25076.00 97.95 0.00 0.00 0.00 0.00 0.00
00:07:43.525
00:07:43.525 true
00:07:43.786 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26
00:07:43.786 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:43.786 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:43.786 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:43.786 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 208489
00:07:44.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:44.728 Nvme0n1 : 3.00 25177.00 98.35 0.00 0.00 0.00 0.00 0.00
00:07:44.728 [2024-11-15T09:48:04.255Z] ===================================================================================================================
00:07:44.728 [2024-11-15T09:48:04.255Z] Total : 25177.00 98.35 0.00 0.00 0.00 0.00 0.00
00:07:44.728
00:07:45.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:45.670 Nvme0n1 : 4.00 25250.25 98.63 0.00 0.00 0.00 0.00 0.00
00:07:45.670 [2024-11-15T09:48:05.197Z] ===================================================================================================================
00:07:45.670 [2024-11-15T09:48:05.197Z] Total : 25250.25 98.63 0.00 0.00 0.00 0.00 0.00
00:07:45.670
00:07:46.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:46.611 Nvme0n1 : 5.00 25294.60 98.81 0.00 0.00 0.00 0.00 0.00
00:07:46.611 [2024-11-15T09:48:06.138Z] ===================================================================================================================
00:07:46.611 [2024-11-15T09:48:06.138Z] Total : 25294.60 98.81 0.00 0.00 0.00 0.00 0.00
00:07:46.611
00:07:47.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:47.553 Nvme0n1 : 6.00 25335.00 98.96 0.00 0.00 0.00 0.00 0.00
00:07:47.553 [2024-11-15T09:48:07.080Z] ===================================================================================================================
00:07:47.553 [2024-11-15T09:48:07.080Z] Total : 25335.00 98.96 0.00 0.00 0.00 0.00 0.00
00:07:47.553
00:07:48.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:48.493 Nvme0n1 : 7.00 25363.57 99.08 0.00 0.00 0.00 0.00 0.00
00:07:48.493 [2024-11-15T09:48:08.020Z] ===================================================================================================================
00:07:48.493 [2024-11-15T09:48:08.020Z] Total : 25363.57 99.08 0.00 0.00 0.00 0.00 0.00
00:07:48.493
00:07:49.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:49.878 Nvme0n1 : 8.00 25384.75 99.16 0.00 0.00 0.00 0.00 0.00
00:07:49.878 [2024-11-15T09:48:09.405Z] ===================================================================================================================
00:07:49.878 [2024-11-15T09:48:09.405Z] Total : 25384.75 99.16 0.00 0.00 0.00 0.00 0.00
00:07:49.878
00:07:50.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:50.818 Nvme0n1 : 9.00 25401.67 99.23 0.00 0.00 0.00 0.00 0.00
00:07:50.818 [2024-11-15T09:48:10.345Z] ===================================================================================================================
00:07:50.818 [2024-11-15T09:48:10.345Z] Total : 25401.67 99.23 0.00 0.00 0.00 0.00 0.00
00:07:50.818
00:07:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:51.759 Nvme0n1 : 10.00 25408.70 99.25 0.00 0.00 0.00 0.00 0.00
00:07:51.759 [2024-11-15T09:48:11.286Z] ===================================================================================================================
00:07:51.759 [2024-11-15T09:48:11.286Z] Total : 25408.70 99.25 0.00 0.00 0.00 0.00 0.00
00:07:51.759
00:07:51.759
00:07:51.759 Latency(us)
00:07:51.759 [2024-11-15T09:48:11.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:51.759 Nvme0n1 : 10.00 25413.52 99.27 0.00 0.00 5033.33 2512.21 11851.09
00:07:51.759 [2024-11-15T09:48:11.286Z] ===================================================================================================================
00:07:51.759 [2024-11-15T09:48:11.286Z] Total : 25413.52 99.27 0.00 0.00 5033.33 2512.21 11851.09
00:07:51.759
00:07:51.759 {
00:07:51.759 "results": [
00:07:51.759 {
00:07:51.759 "job": "Nvme0n1",
00:07:51.759 "core_mask": "0x2",
00:07:51.759 "workload": "randwrite",
00:07:51.759 "status": "finished",
00:07:51.759 "queue_depth": 128,
00:07:51.759 "io_size": 4096,
00:07:51.759 "runtime": 10.00314,
00:07:51.759 "iops": 25413.520154671434,
00:07:51.760 "mibps": 99.27156310418529,
00:07:51.760 "io_failed": 0,
00:07:51.760 "io_timeout": 0,
00:07:51.760 "avg_latency_us": 5033.326965390189,
00:07:51.760 "min_latency_us": 2512.213333333333,
00:07:51.760 "max_latency_us": 11851.093333333334
00:07:51.760 }
00:07:51.760 ],
00:07:51.760 "core_count": 1
00:07:51.760 }
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 208112
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 208112 ']'
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 208112
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 208112
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 208112'
00:07:51.760 killing process with pid 208112
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 208112
00:07:51.760 Received shutdown signal, test time was about 10.000000 seconds
00:07:51.760
00:07:51.760 Latency(us)
00:07:51.760 [2024-11-15T09:48:11.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:51.760 [2024-11-15T09:48:11.287Z] ===================================================================================================================
00:07:51.760 [2024-11-15T09:48:11.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 208112
00:07:51.760 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:52.018 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:52.018 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26
00:07:52.018 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:52.278 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:52.278 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:52.278 10:48:11
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.538 [2024-11-15 10:48:11.850756] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:52.538 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:52.538 request: 00:07:52.538 { 00:07:52.538 "uuid": "e319443c-9d1c-4a43-8f17-fd7f39a24d26", 00:07:52.538 "method": "bdev_lvol_get_lvstores", 00:07:52.538 "req_id": 1 00:07:52.538 } 00:07:52.538 Got JSON-RPC error response 00:07:52.538 response: 00:07:52.538 { 00:07:52.538 "code": -19, 00:07:52.538 "message": "No such device" 00:07:52.538 } 00:07:52.538 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:52.538 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.538 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.538 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.538 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.798 aio_bdev 00:07:52.798 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:52.799 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:53.059 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 -t 2000 00:07:53.059 [ 00:07:53.059 { 00:07:53.059 "name": "e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4", 00:07:53.059 "aliases": [ 00:07:53.059 "lvs/lvol" 00:07:53.059 ], 00:07:53.059 "product_name": "Logical Volume", 00:07:53.059 "block_size": 4096, 00:07:53.059 "num_blocks": 38912, 00:07:53.059 "uuid": "e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4", 00:07:53.059 "assigned_rate_limits": { 00:07:53.059 "rw_ios_per_sec": 0, 00:07:53.059 "rw_mbytes_per_sec": 0, 00:07:53.059 "r_mbytes_per_sec": 0, 00:07:53.059 "w_mbytes_per_sec": 0 00:07:53.059 }, 00:07:53.059 "claimed": false, 00:07:53.059 "zoned": false, 00:07:53.059 "supported_io_types": { 00:07:53.059 "read": true, 00:07:53.059 "write": true, 00:07:53.059 "unmap": true, 00:07:53.059 "flush": false, 00:07:53.059 "reset": true, 00:07:53.059 "nvme_admin": false, 00:07:53.059 "nvme_io": false, 00:07:53.059 "nvme_io_md": false, 00:07:53.059 "write_zeroes": true, 00:07:53.059 "zcopy": false, 00:07:53.059 "get_zone_info": false, 00:07:53.059 "zone_management": false, 00:07:53.060 "zone_append": false, 00:07:53.060 "compare": false, 00:07:53.060 "compare_and_write": false, 00:07:53.060 "abort": false, 00:07:53.060 "seek_hole": true, 00:07:53.060 "seek_data": true, 00:07:53.060 "copy": false, 00:07:53.060 "nvme_iov_md": false 00:07:53.060 }, 00:07:53.060 "driver_specific": { 00:07:53.060 "lvol": { 00:07:53.060 "lvol_store_uuid": "e319443c-9d1c-4a43-8f17-fd7f39a24d26", 00:07:53.060 "base_bdev": "aio_bdev", 00:07:53.060 "thin_provision": false, 00:07:53.060 "num_allocated_clusters": 38, 00:07:53.060 "snapshot": false, 00:07:53.060 "clone": false, 00:07:53.060 "esnap_clone": false 00:07:53.060 } 00:07:53.060 } 00:07:53.060 } 00:07:53.060 ] 00:07:53.060 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:53.060 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:53.060 
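For readers following the trace: the hot-remove/recovery check above (nvmf_lvs_grow.sh lines @84 through @88) reduces to the rpc.py sequence sketched below. This is a sketch, not test output; the rpc.py and aio file paths are shortened from the full Jenkins workspace paths, and <lvs-uuid>/<lvol-uuid> stand in for the e319443c-... lvstore and e978a3fd-... lvol used in this run.

RPC=/path/to/spdk/scripts/rpc.py

$RPC bdev_aio_delete aio_bdev                          # pull the base bdev out from under the lvstore
$RPC bdev_lvol_get_lvstores -u <lvs-uuid> || true      # expected to fail with -19 "No such device"
$RPC bdev_aio_create /path/to/aio_bdev aio_bdev 4096   # recreate the backing bdev
$RPC bdev_wait_for_examine                             # lvol bdev reappears once examine completes
$RPC bdev_get_bdevs -b <lvol-uuid> -t 2000             # confirm the lvol came back
$RPC bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'   # 61 in the clean case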
10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:53.321 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:53.321 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:53.321 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:53.582 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:53.582 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e978a3fd-cf06-4764-b9fe-50ee7c2dd6b4 00:07:53.582 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e319443c-9d1c-4a43-8f17-fd7f39a24d26 00:07:53.842 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.102 00:07:54.102 real 0m15.805s 00:07:54.102 user 0m15.566s 00:07:54.102 sys 0m1.399s 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 ************************************ 00:07:54.102 END TEST lvs_grow_clean 00:07:54.102 ************************************ 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.102 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 ************************************ 00:07:54.102 START TEST lvs_grow_dirty 00:07:54.103 ************************************ 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.103 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.363 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:54.363 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:54.363 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d2c86f26-089b-4448-b312-b0715a7bc007 00:07:54.363 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:07:54.363 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:54.623 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:54.623 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:54.623 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2c86f26-089b-4448-b312-b0715a7bc007 lvol 150 00:07:54.883 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:07:54.883 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.883 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.883 [2024-11-15 10:48:14.379097] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.883 [2024-11-15 10:48:14.379138] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.883 true 00:07:54.883 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:07:54.883 10:48:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:55.143 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.143 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.405 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:07:55.405 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.666 [2024-11-15 10:48:15.037005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.666 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=211774 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 211774 /var/tmp/bdevperf.sock 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 211774 ']' 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.925 [2024-11-15 10:48:15.252774] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
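The bdevperf instance starting here is wired up the same way as in the clean test above. As a sketch (binary and script paths shortened; the NQN, address, and bdevperf flags are the ones used in this run, and <lvol-uuid> stands for the b81c08ca-... lvol), the export-and-measure setup is:

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# bdevperf runs as a separate app with its own RPC socket; the NVMe/TCP controller
# is attached through that socket, then perform_tests drives the 10 s randwrite run.
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests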
00:07:55.925 [2024-11-15 10:48:15.252833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211774 ] 00:07:55.925 [2024-11-15 10:48:15.340588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.925 [2024-11-15 10:48:15.370262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:55.925 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:56.184 Nvme0n1 00:07:56.184 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:56.445 [ 00:07:56.445 { 00:07:56.445 "name": "Nvme0n1", 00:07:56.445 "aliases": [ 00:07:56.445 "b81c08ca-9ed0-431b-834a-bc0cc467f2dc" 00:07:56.445 ], 00:07:56.445 "product_name": "NVMe disk", 00:07:56.445 "block_size": 4096, 00:07:56.445 "num_blocks": 38912, 00:07:56.445 "uuid": "b81c08ca-9ed0-431b-834a-bc0cc467f2dc", 00:07:56.445 "numa_id": 0, 00:07:56.445 "assigned_rate_limits": { 00:07:56.445 "rw_ios_per_sec": 0, 00:07:56.445 "rw_mbytes_per_sec": 0, 00:07:56.445 "r_mbytes_per_sec": 0, 00:07:56.445 "w_mbytes_per_sec": 0 00:07:56.445 }, 00:07:56.445 "claimed": false, 00:07:56.445 "zoned": false, 00:07:56.445 "supported_io_types": { 00:07:56.445 "read": true, 00:07:56.445 "write": true, 00:07:56.445 "unmap": true, 00:07:56.445 "flush": true, 00:07:56.445 "reset": true, 00:07:56.445 "nvme_admin": true, 00:07:56.445 "nvme_io": true, 00:07:56.445 "nvme_io_md": false, 00:07:56.445 "write_zeroes": true, 00:07:56.445 "zcopy": false, 00:07:56.445 "get_zone_info": false, 00:07:56.445 "zone_management": false, 00:07:56.445 "zone_append": false, 00:07:56.445 "compare": true, 00:07:56.445 "compare_and_write": true, 00:07:56.445 "abort": true, 00:07:56.445 "seek_hole": false, 00:07:56.445 "seek_data": false, 00:07:56.445 "copy": true, 00:07:56.445 "nvme_iov_md": false 00:07:56.445 }, 00:07:56.445 "memory_domains": [ 00:07:56.445 { 00:07:56.445 "dma_device_id": "system", 00:07:56.445 "dma_device_type": 1 00:07:56.445 } 00:07:56.445 ], 00:07:56.445 "driver_specific": { 00:07:56.445 "nvme": [ 00:07:56.445 { 00:07:56.445 "trid": { 00:07:56.445 "trtype": "TCP", 00:07:56.445 "adrfam": "IPv4", 00:07:56.445 "traddr": "10.0.0.2", 00:07:56.445 "trsvcid": "4420", 00:07:56.445 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:56.445 }, 00:07:56.445 "ctrlr_data": { 00:07:56.445 "cntlid": 1, 00:07:56.445 "vendor_id": "0x8086", 00:07:56.445 "model_number": "SPDK bdev Controller", 00:07:56.445 "serial_number": "SPDK0", 00:07:56.445 "firmware_revision": "25.01", 00:07:56.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.445 "oacs": { 00:07:56.445 "security": 0, 00:07:56.445 "format": 0, 00:07:56.445 "firmware": 0, 00:07:56.445 "ns_manage": 0 00:07:56.445 }, 00:07:56.445 "multi_ctrlr": true, 00:07:56.445 
"ana_reporting": false 00:07:56.445 }, 00:07:56.445 "vs": { 00:07:56.445 "nvme_version": "1.3" 00:07:56.445 }, 00:07:56.445 "ns_data": { 00:07:56.445 "id": 1, 00:07:56.445 "can_share": true 00:07:56.445 } 00:07:56.445 } 00:07:56.445 ], 00:07:56.445 "mp_policy": "active_passive" 00:07:56.445 } 00:07:56.445 } 00:07:56.445 ] 00:07:56.445 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=211962 00:07:56.445 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:56.445 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.445 Running I/O for 10 seconds... 00:07:57.838 Latency(us) 00:07:57.838 [2024-11-15T09:48:17.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.838 Nvme0n1 : 1.00 24919.00 97.34 0.00 0.00 0.00 0.00 0.00 00:07:57.838 [2024-11-15T09:48:17.365Z] =================================================================================================================== 00:07:57.838 [2024-11-15T09:48:17.365Z] Total : 24919.00 97.34 0.00 0.00 0.00 0.00 0.00 00:07:57.838 00:07:58.408 10:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d2c86f26-089b-4448-b312-b0715a7bc007 00:07:58.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.668 Nvme0n1 : 2.00 25130.50 98.17 0.00 0.00 0.00 0.00 0.00 00:07:58.668 [2024-11-15T09:48:18.195Z] =================================================================================================================== 00:07:58.668 [2024-11-15T09:48:18.195Z] Total : 25130.50 98.17 0.00 0.00 0.00 0.00 0.00 00:07:58.668 00:07:58.668 true 00:07:58.668 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:07:58.668 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:58.928 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:58.928 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:58.928 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 211962 00:07:59.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.498 Nvme0n1 : 3.00 25223.33 98.53 0.00 0.00 0.00 0.00 0.00 00:07:59.498 [2024-11-15T09:48:19.025Z] =================================================================================================================== 00:07:59.498 [2024-11-15T09:48:19.025Z] Total : 25223.33 98.53 0.00 0.00 0.00 0.00 0.00 00:07:59.498 00:08:00.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.438 Nvme0n1 : 4.00 25283.75 98.76 0.00 0.00 0.00 0.00 0.00 00:08:00.438 [2024-11-15T09:48:19.965Z] 
=================================================================================================================== 00:08:00.438 [2024-11-15T09:48:19.965Z] Total : 25283.75 98.76 0.00 0.00 0.00 0.00 0.00 00:08:00.438 00:08:01.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.822 Nvme0n1 : 5.00 25315.80 98.89 0.00 0.00 0.00 0.00 0.00 00:08:01.822 [2024-11-15T09:48:21.349Z] =================================================================================================================== 00:08:01.822 [2024-11-15T09:48:21.349Z] Total : 25315.80 98.89 0.00 0.00 0.00 0.00 0.00 00:08:01.822 00:08:02.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.762 Nvme0n1 : 6.00 25351.50 99.03 0.00 0.00 0.00 0.00 0.00 00:08:02.762 [2024-11-15T09:48:22.289Z] =================================================================================================================== 00:08:02.762 [2024-11-15T09:48:22.289Z] Total : 25351.50 99.03 0.00 0.00 0.00 0.00 0.00 00:08:02.762 00:08:03.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.702 Nvme0n1 : 7.00 25377.43 99.13 0.00 0.00 0.00 0.00 0.00 00:08:03.702 [2024-11-15T09:48:23.229Z] =================================================================================================================== 00:08:03.702 [2024-11-15T09:48:23.229Z] Total : 25377.43 99.13 0.00 0.00 0.00 0.00 0.00 00:08:03.702 00:08:04.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.652 Nvme0n1 : 8.00 25405.00 99.24 0.00 0.00 0.00 0.00 0.00 00:08:04.652 [2024-11-15T09:48:24.179Z] =================================================================================================================== 00:08:04.652 [2024-11-15T09:48:24.179Z] Total : 25405.00 99.24 0.00 0.00 0.00 0.00 0.00 00:08:04.652 00:08:05.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.594 Nvme0n1 : 9.00 25419.44 99.29 0.00 0.00 0.00 0.00 0.00 00:08:05.594 [2024-11-15T09:48:25.121Z] =================================================================================================================== 00:08:05.594 [2024-11-15T09:48:25.121Z] Total : 25419.44 99.29 0.00 0.00 0.00 0.00 0.00 00:08:05.594 00:08:06.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.536 Nvme0n1 : 10.00 25430.80 99.34 0.00 0.00 0.00 0.00 0.00 00:08:06.536 [2024-11-15T09:48:26.063Z] =================================================================================================================== 00:08:06.536 [2024-11-15T09:48:26.063Z] Total : 25430.80 99.34 0.00 0.00 0.00 0.00 0.00 00:08:06.536 00:08:06.536 00:08:06.536 Latency(us) 00:08:06.536 [2024-11-15T09:48:26.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.536 Nvme0n1 : 10.00 25432.01 99.34 0.00 0.00 5030.02 3085.65 16056.32 00:08:06.536 [2024-11-15T09:48:26.063Z] =================================================================================================================== 00:08:06.536 [2024-11-15T09:48:26.063Z] Total : 25432.01 99.34 0.00 0.00 5030.02 3085.65 16056.32 00:08:06.536 { 00:08:06.536 "results": [ 00:08:06.536 { 00:08:06.536 "job": "Nvme0n1", 00:08:06.536 "core_mask": "0x2", 00:08:06.536 "workload": "randwrite", 00:08:06.536 "status": "finished", 00:08:06.536 "queue_depth": 128, 00:08:06.536 "io_size": 4096, 00:08:06.536 
"runtime": 10.004558, 00:08:06.536 "iops": 25432.008090712254, 00:08:06.536 "mibps": 99.34378160434474, 00:08:06.536 "io_failed": 0, 00:08:06.536 "io_timeout": 0, 00:08:06.536 "avg_latency_us": 5030.020575285468, 00:08:06.537 "min_latency_us": 3085.653333333333, 00:08:06.537 "max_latency_us": 16056.32 00:08:06.537 } 00:08:06.537 ], 00:08:06.537 "core_count": 1 00:08:06.537 } 00:08:06.537 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 211774 00:08:06.537 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 211774 ']' 00:08:06.537 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 211774 00:08:06.537 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:06.537 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.537 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 211774 00:08:06.797 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:06.797 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:06.797 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 211774' 00:08:06.797 killing process with pid 211774 00:08:06.797 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 211774 00:08:06.797 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.797 00:08:06.797 Latency(us) 00:08:06.797 [2024-11-15T09:48:26.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.797 [2024-11-15T09:48:26.325Z] =================================================================================================================== 00:08:06.798 [2024-11-15T09:48:26.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.798 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 211774 00:08:06.798 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.058 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.058 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:07.058 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:07.319 10:48:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 207401 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 207401 00:08:07.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 207401 Killed "${NVMF_APP[@]}" "$@" 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=214140 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 214140 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 214140 ']' 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.319 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.319 [2024-11-15 10:48:26.771164] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:08:07.319 [2024-11-15 10:48:26.771245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.580 [2024-11-15 10:48:26.863602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.580 [2024-11-15 10:48:26.894026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.580 [2024-11-15 10:48:26.894053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.580 [2024-11-15 10:48:26.894059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.580 [2024-11-15 10:48:26.894064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:07.580 [2024-11-15 10:48:26.894067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.580 [2024-11-15 10:48:26.894518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.150 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.411 [2024-11-15 10:48:27.745213] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:08.411 [2024-11-15 10:48:27.745288] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:08.411 [2024-11-15 10:48:27.745310] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.411 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b81c08ca-9ed0-431b-834a-bc0cc467f2dc -t 2000 00:08:08.671 [ 00:08:08.671 { 00:08:08.671 "name": "b81c08ca-9ed0-431b-834a-bc0cc467f2dc", 00:08:08.671 "aliases": [ 00:08:08.671 "lvs/lvol" 00:08:08.671 ], 00:08:08.671 "product_name": "Logical Volume", 00:08:08.671 "block_size": 4096, 00:08:08.671 "num_blocks": 38912, 00:08:08.671 "uuid": "b81c08ca-9ed0-431b-834a-bc0cc467f2dc", 00:08:08.671 "assigned_rate_limits": { 00:08:08.671 "rw_ios_per_sec": 0, 00:08:08.671 "rw_mbytes_per_sec": 0, 
00:08:08.671 "r_mbytes_per_sec": 0, 00:08:08.671 "w_mbytes_per_sec": 0 00:08:08.671 }, 00:08:08.671 "claimed": false, 00:08:08.671 "zoned": false, 00:08:08.671 "supported_io_types": { 00:08:08.671 "read": true, 00:08:08.671 "write": true, 00:08:08.671 "unmap": true, 00:08:08.671 "flush": false, 00:08:08.671 "reset": true, 00:08:08.671 "nvme_admin": false, 00:08:08.671 "nvme_io": false, 00:08:08.671 "nvme_io_md": false, 00:08:08.671 "write_zeroes": true, 00:08:08.671 "zcopy": false, 00:08:08.671 "get_zone_info": false, 00:08:08.671 "zone_management": false, 00:08:08.671 "zone_append": false, 00:08:08.671 "compare": false, 00:08:08.671 "compare_and_write": false, 00:08:08.671 "abort": false, 00:08:08.671 "seek_hole": true, 00:08:08.671 "seek_data": true, 00:08:08.671 "copy": false, 00:08:08.671 "nvme_iov_md": false 00:08:08.671 }, 00:08:08.671 "driver_specific": { 00:08:08.671 "lvol": { 00:08:08.671 "lvol_store_uuid": "d2c86f26-089b-4448-b312-b0715a7bc007", 00:08:08.671 "base_bdev": "aio_bdev", 00:08:08.671 "thin_provision": false, 00:08:08.671 "num_allocated_clusters": 38, 00:08:08.671 "snapshot": false, 00:08:08.671 "clone": false, 00:08:08.671 "esnap_clone": false 00:08:08.671 } 00:08:08.671 } 00:08:08.671 } 00:08:08.671 ] 00:08:08.671 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:08.671 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:08.671 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:08.932 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:08.932 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:08.932 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:08.932 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:08.932 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.192 [2024-11-15 10:48:28.565792] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.192 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.193 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.193 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.193 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.193 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:09.453 request: 00:08:09.453 { 00:08:09.453 "uuid": "d2c86f26-089b-4448-b312-b0715a7bc007", 00:08:09.453 "method": "bdev_lvol_get_lvstores", 00:08:09.453 "req_id": 1 00:08:09.453 } 00:08:09.453 Got JSON-RPC error response 00:08:09.453 response: 00:08:09.453 { 00:08:09.453 "code": -19, 00:08:09.453 "message": "No such device" 00:08:09.453 } 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.453 aio_bdev 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.453 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.453 10:48:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.713 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b81c08ca-9ed0-431b-834a-bc0cc467f2dc -t 2000 00:08:09.974 [ 00:08:09.974 { 00:08:09.974 "name": "b81c08ca-9ed0-431b-834a-bc0cc467f2dc", 00:08:09.974 "aliases": [ 00:08:09.974 "lvs/lvol" 00:08:09.974 ], 00:08:09.974 "product_name": "Logical Volume", 00:08:09.974 "block_size": 4096, 00:08:09.974 "num_blocks": 38912, 00:08:09.974 "uuid": "b81c08ca-9ed0-431b-834a-bc0cc467f2dc", 00:08:09.974 "assigned_rate_limits": { 00:08:09.974 "rw_ios_per_sec": 0, 00:08:09.974 "rw_mbytes_per_sec": 0, 00:08:09.974 "r_mbytes_per_sec": 0, 00:08:09.974 "w_mbytes_per_sec": 0 00:08:09.974 }, 00:08:09.974 "claimed": false, 00:08:09.974 "zoned": false, 00:08:09.974 "supported_io_types": { 00:08:09.974 "read": true, 00:08:09.974 "write": true, 00:08:09.974 "unmap": true, 00:08:09.974 "flush": false, 00:08:09.974 "reset": true, 00:08:09.974 "nvme_admin": false, 00:08:09.974 "nvme_io": false, 00:08:09.974 "nvme_io_md": false, 00:08:09.974 "write_zeroes": true, 00:08:09.974 "zcopy": false, 00:08:09.974 "get_zone_info": false, 00:08:09.974 "zone_management": false, 00:08:09.974 "zone_append": false, 00:08:09.974 "compare": false, 00:08:09.974 "compare_and_write": false, 00:08:09.974 "abort": false, 00:08:09.974 "seek_hole": true, 00:08:09.974 "seek_data": true, 00:08:09.974 "copy": false, 00:08:09.974 "nvme_iov_md": false 00:08:09.974 }, 00:08:09.974 "driver_specific": { 00:08:09.974 "lvol": { 00:08:09.974 "lvol_store_uuid": "d2c86f26-089b-4448-b312-b0715a7bc007", 00:08:09.974 "base_bdev": "aio_bdev", 00:08:09.974 "thin_provision": false, 00:08:09.974 "num_allocated_clusters": 38, 00:08:09.974 "snapshot": false, 00:08:09.974 "clone": false, 00:08:09.974 "esnap_clone": false 00:08:09.974 } 00:08:09.974 } 00:08:09.974 } 00:08:09.974 ] 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:09.974 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:10.234 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.234 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b81c08ca-9ed0-431b-834a-bc0cc467f2dc 00:08:10.495 10:48:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2c86f26-089b-4448-b312-b0715a7bc007 00:08:10.495 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.755 00:08:10.755 real 0m16.684s 00:08:10.755 user 0m44.068s 00:08:10.755 sys 0m3.030s 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.755 ************************************ 00:08:10.755 END TEST lvs_grow_dirty 00:08:10.755 ************************************ 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:10.755 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:10.756 nvmf_trace.0 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.756 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.016 rmmod nvme_tcp 00:08:11.016 rmmod nvme_fabrics 00:08:11.016 rmmod nvme_keyring 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:11.016 
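The module unload just traced (nvmf/common.sh @124 through @129) has roughly the shape below. A sketch only: the exact retry/break condition of the {1..20} loop is not visible in this trace and is assumed to be break-on-success; with -v, modprobe prints the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above.

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # assumed break-on-success
done
set -e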
10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 214140 ']' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 214140 ']' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 214140' 00:08:11.016 killing process with pid 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 214140 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.016 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.559 00:08:13.559 real 0m43.865s 00:08:13.559 user 1m5.983s 00:08:13.559 sys 0m10.536s 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 ************************************ 00:08:13.559 END TEST nvmf_lvs_grow 00:08:13.559 ************************************ 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 ************************************ 00:08:13.559 START TEST nvmf_bdev_io_wait 00:08:13.559 ************************************ 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:13.559 * Looking for test storage... 00:08:13.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.559 --rc genhtml_branch_coverage=1 00:08:13.559 --rc genhtml_function_coverage=1 00:08:13.559 --rc genhtml_legend=1 00:08:13.559 --rc geninfo_all_blocks=1 00:08:13.559 --rc geninfo_unexecuted_blocks=1 00:08:13.559 00:08:13.559 ' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.559 --rc genhtml_branch_coverage=1 00:08:13.559 --rc genhtml_function_coverage=1 00:08:13.559 --rc genhtml_legend=1 00:08:13.559 --rc geninfo_all_blocks=1 00:08:13.559 --rc geninfo_unexecuted_blocks=1 00:08:13.559 00:08:13.559 ' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.559 --rc genhtml_branch_coverage=1 00:08:13.559 --rc genhtml_function_coverage=1 00:08:13.559 --rc genhtml_legend=1 00:08:13.559 --rc geninfo_all_blocks=1 00:08:13.559 --rc geninfo_unexecuted_blocks=1 00:08:13.559 00:08:13.559 ' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.559 --rc genhtml_branch_coverage=1 00:08:13.559 --rc genhtml_function_coverage=1 00:08:13.559 --rc genhtml_legend=1 00:08:13.559 --rc geninfo_all_blocks=1 00:08:13.559 --rc geninfo_unexecuted_blocks=1 00:08:13.559 00:08:13.559 ' 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.559 10:48:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:13.559 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.560 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:21.697 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:21.697 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.697 10:48:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.697 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:21.698 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:21.698 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:08:21.698 00:08:21.698 --- 10.0.0.2 ping statistics --- 00:08:21.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.698 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:08:21.698 00:08:21.698 --- 10.0.0.1 ping statistics --- 00:08:21.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.698 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=219211 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 219211 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 219211 ']' 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.698 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.698 [2024-11-15 10:48:40.518236] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:08:21.698 [2024-11-15 10:48:40.518301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.698 [2024-11-15 10:48:40.619080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.698 [2024-11-15 10:48:40.673264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.698 [2024-11-15 10:48:40.673315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.698 [2024-11-15 10:48:40.673324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.698 [2024-11-15 10:48:40.673332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.698 [2024-11-15 10:48:40.673338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.698 [2024-11-15 10:48:40.675454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.698 [2024-11-15 10:48:40.675663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.698 [2024-11-15 10:48:40.675960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.698 [2024-11-15 10:48:40.675962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:21.959 [2024-11-15 10:48:41.475650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.959 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 Malloc0 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 [2024-11-15 10:48:41.541437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=219325 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=219328 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.221 { 00:08:22.221 "params": { 
00:08:22.221 "name": "Nvme$subsystem", 00:08:22.221 "trtype": "$TEST_TRANSPORT", 00:08:22.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.221 "adrfam": "ipv4", 00:08:22.221 "trsvcid": "$NVMF_PORT", 00:08:22.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.221 "hdgst": ${hdgst:-false}, 00:08:22.221 "ddgst": ${ddgst:-false} 00:08:22.221 }, 00:08:22.221 "method": "bdev_nvme_attach_controller" 00:08:22.221 } 00:08:22.221 EOF 00:08:22.221 )") 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=219330 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.221 { 00:08:22.221 "params": { 00:08:22.221 "name": "Nvme$subsystem", 00:08:22.221 "trtype": "$TEST_TRANSPORT", 00:08:22.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.221 "adrfam": "ipv4", 00:08:22.221 "trsvcid": "$NVMF_PORT", 00:08:22.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.221 "hdgst": ${hdgst:-false}, 00:08:22.221 "ddgst": ${ddgst:-false} 00:08:22.221 }, 00:08:22.221 "method": "bdev_nvme_attach_controller" 00:08:22.221 } 00:08:22.221 EOF 00:08:22.221 )") 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=219334 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.221 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.221 { 00:08:22.221 "params": { 00:08:22.221 "name": "Nvme$subsystem", 00:08:22.221 "trtype": "$TEST_TRANSPORT", 00:08:22.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.221 "adrfam": "ipv4", 00:08:22.221 "trsvcid": "$NVMF_PORT", 00:08:22.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.221 "hdgst": ${hdgst:-false}, 
00:08:22.221 "ddgst": ${ddgst:-false} 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 } 00:08:22.222 EOF 00:08:22.222 )") 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.222 { 00:08:22.222 "params": { 00:08:22.222 "name": "Nvme$subsystem", 00:08:22.222 "trtype": "$TEST_TRANSPORT", 00:08:22.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.222 "adrfam": "ipv4", 00:08:22.222 "trsvcid": "$NVMF_PORT", 00:08:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.222 "hdgst": ${hdgst:-false}, 00:08:22.222 "ddgst": ${ddgst:-false} 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 } 00:08:22.222 EOF 00:08:22.222 )") 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 219325 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.222 "params": { 00:08:22.222 "name": "Nvme1", 00:08:22.222 "trtype": "tcp", 00:08:22.222 "traddr": "10.0.0.2", 00:08:22.222 "adrfam": "ipv4", 00:08:22.222 "trsvcid": "4420", 00:08:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.222 "hdgst": false, 00:08:22.222 "ddgst": false 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 }' 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.222 "params": { 00:08:22.222 "name": "Nvme1", 00:08:22.222 "trtype": "tcp", 00:08:22.222 "traddr": "10.0.0.2", 00:08:22.222 "adrfam": "ipv4", 00:08:22.222 "trsvcid": "4420", 00:08:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.222 "hdgst": false, 00:08:22.222 "ddgst": false 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 }' 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.222 "params": { 00:08:22.222 "name": "Nvme1", 00:08:22.222 "trtype": "tcp", 00:08:22.222 "traddr": "10.0.0.2", 00:08:22.222 "adrfam": "ipv4", 00:08:22.222 "trsvcid": "4420", 00:08:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.222 "hdgst": false, 00:08:22.222 "ddgst": false 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 }' 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.222 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.222 "params": { 00:08:22.222 "name": "Nvme1", 00:08:22.222 "trtype": "tcp", 00:08:22.222 "traddr": "10.0.0.2", 00:08:22.222 "adrfam": "ipv4", 00:08:22.222 "trsvcid": "4420", 00:08:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.222 "hdgst": false, 00:08:22.222 "ddgst": false 00:08:22.222 }, 00:08:22.222 "method": "bdev_nvme_attach_controller" 00:08:22.222 }' 00:08:22.222 [2024-11-15 10:48:41.599479] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:08:22.222 [2024-11-15 10:48:41.599558] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:22.222 [2024-11-15 10:48:41.601862] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:08:22.222 [2024-11-15 10:48:41.601931] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:22.222 [2024-11-15 10:48:41.604372] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:08:22.222 [2024-11-15 10:48:41.604438] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:22.222 [2024-11-15 10:48:41.606123] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:08:22.222 [2024-11-15 10:48:41.606189] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:22.483 [2024-11-15 10:48:41.820788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.483 [2024-11-15 10:48:41.861500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.483 [2024-11-15 10:48:41.914827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.483 [2024-11-15 10:48:41.953897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:22.483 [2024-11-15 10:48:41.978359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.744 [2024-11-15 10:48:42.017500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:22.744 [2024-11-15 10:48:42.050303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.744 [2024-11-15 10:48:42.090021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:22.744 Running I/O for 1 seconds... 00:08:22.744 Running I/O for 1 seconds... 00:08:22.744 Running I/O for 1 seconds... 00:08:22.744 Running I/O for 1 seconds... 00:08:23.686 12059.00 IOPS, 47.11 MiB/s 00:08:23.686 Latency(us) 00:08:23.686 [2024-11-15T09:48:43.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.686 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:23.686 Nvme1n1 : 1.01 12111.92 47.31 0.00 0.00 10529.76 5707.09 19005.44 00:08:23.686 [2024-11-15T09:48:43.213Z] =================================================================================================================== 00:08:23.686 [2024-11-15T09:48:43.213Z] Total : 12111.92 47.31 0.00 0.00 10529.76 5707.09 19005.44 00:08:23.686 5962.00 IOPS, 23.29 MiB/s 00:08:23.686 Latency(us) 00:08:23.686 [2024-11-15T09:48:43.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.686 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:23.686 Nvme1n1 : 1.02 5992.00 23.41 0.00 0.00 21108.42 11359.57 33860.27 00:08:23.686 [2024-11-15T09:48:43.213Z] =================================================================================================================== 00:08:23.686 [2024-11-15T09:48:43.213Z] Total : 5992.00 23.41 0.00 0.00 21108.42 11359.57 33860.27 00:08:23.686 187648.00 IOPS, 733.00 MiB/s 00:08:23.686 Latency(us) 00:08:23.686 [2024-11-15T09:48:43.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.686 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:23.686 Nvme1n1 : 1.00 187263.12 731.50 0.00 0.00 679.57 314.03 2048.00 00:08:23.686 [2024-11-15T09:48:43.213Z] =================================================================================================================== 00:08:23.686 [2024-11-15T09:48:43.213Z] Total : 187263.12 731.50 0.00 0.00 679.57 314.03 2048.00 00:08:23.948 6281.00 IOPS, 24.54 MiB/s 00:08:23.948 Latency(us) 00:08:23.948 [2024-11-15T09:48:43.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.948 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:23.948 Nvme1n1 : 1.01 6399.21 25.00 0.00 0.00 19938.10 4532.91 46530.56 00:08:23.948 [2024-11-15T09:48:43.475Z] 
=================================================================================================================== 00:08:23.948 [2024-11-15T09:48:43.475Z] Total : 6399.21 25.00 0.00 0.00 19938.10 4532.91 46530.56 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 219328 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 219330 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 219334 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.948 rmmod nvme_tcp 00:08:23.948 rmmod nvme_fabrics 00:08:23.948 rmmod nvme_keyring 00:08:23.948 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 219211 ']' 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 219211 ']' 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 219211' 00:08:24.208 killing process with pid 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 219211 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.208 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.209 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.755 00:08:26.755 real 0m13.066s 00:08:26.755 user 0m19.304s 00:08:26.755 sys 0m7.446s 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.755 ************************************ 00:08:26.755 END TEST nvmf_bdev_io_wait 00:08:26.755 ************************************ 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.755 ************************************ 00:08:26.755 START TEST nvmf_queue_depth 00:08:26.755 ************************************ 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:26.755 * Looking for test storage... 
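The four latency tables above close out nvmf_bdev_io_wait: four bdevperf processes, pinned to separate reactors via core masks 0x10 through 0x80, each drove a single workload (write, read, flush, unmap) against Nvme1n1 at queue depth 128 with 4 KiB I/Os for one second. Flush unsurprisingly posts ~187k IOPS at sub-millisecond latency, since it moves no data. One such job re-run standalone would look roughly like this; the JSON config naming the NVMe-oF bdev is assumed, not shown in the log:

build/examples/bdevperf --json nvme_bdev.json -m 0x10 -q 128 -o 4096 -w write -t 1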
00:08:26.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.755 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.755 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.756 --rc genhtml_branch_coverage=1 00:08:26.756 --rc genhtml_function_coverage=1 00:08:26.756 --rc genhtml_legend=1 00:08:26.756 --rc geninfo_all_blocks=1 00:08:26.756 --rc geninfo_unexecuted_blocks=1 00:08:26.756 00:08:26.756 ' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.756 --rc genhtml_branch_coverage=1 00:08:26.756 --rc genhtml_function_coverage=1 00:08:26.756 --rc genhtml_legend=1 00:08:26.756 --rc geninfo_all_blocks=1 00:08:26.756 --rc geninfo_unexecuted_blocks=1 00:08:26.756 00:08:26.756 ' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.756 --rc genhtml_branch_coverage=1 00:08:26.756 --rc genhtml_function_coverage=1 00:08:26.756 --rc genhtml_legend=1 00:08:26.756 --rc geninfo_all_blocks=1 00:08:26.756 --rc geninfo_unexecuted_blocks=1 00:08:26.756 00:08:26.756 ' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.756 --rc genhtml_branch_coverage=1 00:08:26.756 --rc genhtml_function_coverage=1 00:08:26.756 --rc genhtml_legend=1 00:08:26.756 --rc geninfo_all_blocks=1 00:08:26.756 --rc geninfo_unexecuted_blocks=1 00:08:26.756 00:08:26.756 ' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
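The xtrace above is the harness probing the installed lcov (1.15 on this rig) against version 2 to decide which coverage flags to export; cmp_versions splits both strings on dots and compares component by component. A simplified standalone equivalent, using sort -V in place of the component loop:

lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
if lt "$(lcov --version | awk '{print $NF}')" 2; then
  export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi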
-- nvmf/common.sh@7 -- # uname -s 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.756 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.757 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:34.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:34.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:34.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.974 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:34.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
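Device discovery above found both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, ice driver) and resolved each PCI function to its kernel netdev through sysfs. By hand, the same lookup is:

for pci in 0000:4b:00.0 0000:4b:00.1; do
  echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"   # cvl_0_0 and cvl_0_1 on this rig
done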
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:08:34.975 00:08:34.975 --- 10.0.0.2 ping statistics --- 00:08:34.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.975 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:08:34.975 00:08:34.975 --- 10.0.0.1 ping statistics --- 00:08:34.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.975 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=223966 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 223966 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 223966 ']' 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.975 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.975 [2024-11-15 10:48:53.721231] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
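nvmf_tcp_init wired the two E810 ports back to back through a network namespace so a single host can play both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 in the root namespace), then verified the path with one ping in each direction before launching the target inside the namespace. Condensed, the sequence is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The iptables rule is inserted with an SPDK_NVMF comment so teardown can drop it later via iptables-save | grep -v SPDK_NVMF | iptables-restore.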
00:08:34.975 [2024-11-15 10:48:53.721300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.975 [2024-11-15 10:48:53.823653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.975 [2024-11-15 10:48:53.874444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.975 [2024-11-15 10:48:53.874497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.975 [2024-11-15 10:48:53.874506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.975 [2024-11-15 10:48:53.874512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.975 [2024-11-15 10:48:53.874518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.975 [2024-11-15 10:48:53.875325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 [2024-11-15 10:48:54.580849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 Malloc0 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.308 10:48:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 [2024-11-15 10:48:54.642021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=224299 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 224299 /var/tmp/bdevperf.sock 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 224299 ']' 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.308 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.308 [2024-11-15 10:48:54.700333] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
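With the target listening, queue_depth.sh provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host (-a) with serial SPDK00000000000001, the bdev as its namespace, and a TCP listener on 10.0.0.2:4420. Replayed with scripts/rpc.py against the default socket (the harness wraps these calls in its rpc_cmd helper):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420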
00:08:35.309 [2024-11-15 10:48:54.700399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224299 ] 00:08:35.309 [2024-11-15 10:48:54.794088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.570 [2024-11-15 10:48:54.847309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.142 NVMe0n1 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.142 10:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.403 Running I/O for 10 seconds... 00:08:38.288 8206.00 IOPS, 32.05 MiB/s [2024-11-15T09:48:58.756Z] 9216.00 IOPS, 36.00 MiB/s [2024-11-15T09:49:00.138Z] 9948.67 IOPS, 38.86 MiB/s [2024-11-15T09:49:00.709Z] 10583.75 IOPS, 41.34 MiB/s [2024-11-15T09:49:02.090Z] 11060.00 IOPS, 43.20 MiB/s [2024-11-15T09:49:03.027Z] 11436.67 IOPS, 44.67 MiB/s [2024-11-15T09:49:03.965Z] 11717.71 IOPS, 45.77 MiB/s [2024-11-15T09:49:04.902Z] 11947.25 IOPS, 46.67 MiB/s [2024-11-15T09:49:05.843Z] 12137.67 IOPS, 47.41 MiB/s [2024-11-15T09:49:05.843Z] 12270.80 IOPS, 47.93 MiB/s 00:08:46.316 Latency(us) 00:08:46.316 [2024-11-15T09:49:05.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.316 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:46.316 Verification LBA range: start 0x0 length 0x4000 00:08:46.316 NVMe0n1 : 10.11 12241.50 47.82 0.00 0.00 83010.06 25231.36 75147.95 00:08:46.316 [2024-11-15T09:49:05.843Z] =================================================================================================================== 00:08:46.316 [2024-11-15T09:49:05.843Z] Total : 12241.50 47.82 0.00 0.00 83010.06 25231.36 75147.95 00:08:46.316 { 00:08:46.316 "results": [ 00:08:46.316 { 00:08:46.316 "job": "NVMe0n1", 00:08:46.316 "core_mask": "0x1", 00:08:46.316 "workload": "verify", 00:08:46.316 "status": "finished", 00:08:46.316 "verify_range": { 00:08:46.316 "start": 0, 00:08:46.316 "length": 16384 00:08:46.316 }, 00:08:46.316 "queue_depth": 1024, 00:08:46.316 "io_size": 4096, 00:08:46.316 "runtime": 10.105544, 00:08:46.316 "iops": 12241.498330025577, 00:08:46.316 "mibps": 47.81835285166241, 00:08:46.316 "io_failed": 0, 00:08:46.316 "io_timeout": 0, 00:08:46.316 "avg_latency_us": 83010.06371506867, 00:08:46.316 "min_latency_us": 25231.36, 00:08:46.316 "max_latency_us": 75147.94666666667 00:08:46.316 } 00:08:46.316 ], 00:08:46.316 "core_count": 1 00:08:46.316 } 00:08:46.316 10:49:05 
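The initiator side is driven entirely over bdevperf's private RPC socket: -z starts bdevperf paused, the NVMe-oF controller is attached while it waits (surfacing the remote namespace as NVMe0n1), and perform_tests kicks off the 10-second verify run whose JSON blob above is the machine-readable result (~12.2k IOPS at queue depth 1024, ~83 ms average latency). Sketched with the paths the log prints:

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests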
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 224299 00:08:46.316 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 224299 ']' 00:08:46.316 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 224299 00:08:46.316 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 224299 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 224299' 00:08:46.576 killing process with pid 224299 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 224299 00:08:46.576 Received shutdown signal, test time was about 10.000000 seconds 00:08:46.576 00:08:46.576 Latency(us) 00:08:46.576 [2024-11-15T09:49:06.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.576 [2024-11-15T09:49:06.103Z] =================================================================================================================== 00:08:46.576 [2024-11-15T09:49:06.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:46.576 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 224299 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.576 rmmod nvme_tcp 00:08:46.576 rmmod nvme_fabrics 00:08:46.576 rmmod nvme_keyring 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 223966 ']' 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 223966 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 223966 ']' 00:08:46.576 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 223966 00:08:46.577 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:46.577 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.577 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 223966 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 223966' 00:08:46.837 killing process with pid 223966 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 223966 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 223966 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.837 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.377 00:08:49.377 real 0m22.512s 00:08:49.377 user 0m25.674s 00:08:49.377 sys 0m7.109s 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.377 ************************************ 00:08:49.377 END TEST nvmf_queue_depth 00:08:49.377 ************************************ 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core -- 
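Both shutdowns above go through the same killprocess helper: confirm the pid is still alive with kill -0, check the process name with ps so a sudo wrapper is never signalled directly, then kill and reap it. Reduced to those visible steps (the real implementation in common/autotest_common.sh handles more cases):

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # already gone
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 0                  # don't signal the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                         # reaps only children of this shell
}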
common/autotest_common.sh@10 -- # set +x 00:08:49.377 ************************************ 00:08:49.377 START TEST nvmf_target_multipath 00:08:49.377 ************************************ 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.377 * Looking for test storage... 00:08:49.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.377 --rc genhtml_branch_coverage=1 00:08:49.377 --rc genhtml_function_coverage=1 00:08:49.377 --rc genhtml_legend=1 00:08:49.377 --rc geninfo_all_blocks=1 00:08:49.377 --rc geninfo_unexecuted_blocks=1 00:08:49.377 00:08:49.377 ' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.377 --rc genhtml_branch_coverage=1 00:08:49.377 --rc genhtml_function_coverage=1 00:08:49.377 --rc genhtml_legend=1 00:08:49.377 --rc geninfo_all_blocks=1 00:08:49.377 --rc geninfo_unexecuted_blocks=1 00:08:49.377 00:08:49.377 ' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.377 --rc genhtml_branch_coverage=1 00:08:49.377 --rc genhtml_function_coverage=1 00:08:49.377 --rc genhtml_legend=1 00:08:49.377 --rc geninfo_all_blocks=1 00:08:49.377 --rc geninfo_unexecuted_blocks=1 00:08:49.377 00:08:49.377 ' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.377 --rc genhtml_branch_coverage=1 00:08:49.377 --rc genhtml_function_coverage=1 00:08:49.377 --rc genhtml_legend=1 00:08:49.377 --rc geninfo_all_blocks=1 00:08:49.377 --rc geninfo_unexecuted_blocks=1 00:08:49.377 00:08:49.377 ' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.377 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.378 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:57.512 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:57.512 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:57.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.512 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.512 10:49:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:57.513 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.513 10:49:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:08:57.513 00:08:57.513 --- 10.0.0.2 ping statistics --- 00:08:57.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.513 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:57.513 00:08:57.513 --- 10.0.0.1 ping statistics --- 00:08:57.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.513 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:57.513 only one NIC for nvmf test 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
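
The trace above is the standard split-namespace topology that nvmf_tcp_init builds for single-host TCP runs: the target port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, the firewall rule is comment-tagged SPDK_NVMF so teardown can find it, and one ping in each direction proves the path. A condensed sketch of just that setup, using the cvl_0_* aliases and addresses of this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Comment-tag the rule so cleanup can strip it by filtering on SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1                # target ns -> root ns

The matching teardown is visible just below: iptr pipes iptables-save through grep -v SPDK_NVMF and back into iptables-restore, which removes exactly the tagged rule.
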
00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.513 rmmod nvme_tcp 00:08:57.513 rmmod nvme_fabrics 00:08:57.513 rmmod nvme_keyring 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.513 10:49:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:58.895 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.896 00:08:58.896 real 0m9.996s 00:08:58.896 user 0m2.216s 00:08:58.896 sys 0m5.710s 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.896 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:58.896 ************************************ 00:08:58.896 END TEST nvmf_target_multipath 00:08:58.896 ************************************ 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.156 ************************************ 00:08:59.156 START TEST nvmf_zcopy 00:08:59.156 ************************************ 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:59.156 * Looking for test storage... 
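
Both suites print "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" (once above, and again below when zcopy.sh re-sources common.sh). That is bash complaining about a numeric test against an empty string, not a test failure; the xtrace shows the offending expansion as '[' '' -eq 1 ']'. A minimal reproduction with a hypothetical flag variable (the real variable name is not visible in the trace):

    flag=''                               # unset/empty in this environment
    [ "$flag" -eq 1 ] && echo enabled     # -> "[: : integer expression expected"
    # Defensive forms that stay quiet when the flag is empty (an assumption
    # about the intended fix, not the upstream patch):
    [ "${flag:-0}" -eq 1 ] && echo enabled
    [[ $flag == 1 ]] && echo enabled

The run continues past the message both times, so it is noise rather than a failure, but it fires on every source of nvmf/common.sh.
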
00:08:59.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.156 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.418 --rc genhtml_branch_coverage=1 00:08:59.418 --rc genhtml_function_coverage=1 00:08:59.418 --rc genhtml_legend=1 00:08:59.418 --rc geninfo_all_blocks=1 00:08:59.418 --rc geninfo_unexecuted_blocks=1 00:08:59.418 00:08:59.418 ' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.418 --rc genhtml_branch_coverage=1 00:08:59.418 --rc genhtml_function_coverage=1 00:08:59.418 --rc genhtml_legend=1 00:08:59.418 --rc geninfo_all_blocks=1 00:08:59.418 --rc geninfo_unexecuted_blocks=1 00:08:59.418 00:08:59.418 ' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.418 --rc genhtml_branch_coverage=1 00:08:59.418 --rc genhtml_function_coverage=1 00:08:59.418 --rc genhtml_legend=1 00:08:59.418 --rc geninfo_all_blocks=1 00:08:59.418 --rc geninfo_unexecuted_blocks=1 00:08:59.418 00:08:59.418 ' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.418 --rc genhtml_branch_coverage=1 00:08:59.418 --rc genhtml_function_coverage=1 00:08:59.418 --rc genhtml_legend=1 00:08:59.418 --rc geninfo_all_blocks=1 00:08:59.418 --rc geninfo_unexecuted_blocks=1 00:08:59.418 00:08:59.418 ' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.418 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.419 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.555 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:07.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:07.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:07.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:07.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.556 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:09:07.556 00:09:07.556 --- 10.0.0.2 ping statistics --- 00:09:07.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.556 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:07.556 00:09:07.556 --- 10.0.0.1 ping statistics --- 00:09:07.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.556 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=235000 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 235000 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 235000 ']' 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.556 10:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.556 [2024-11-15 10:49:26.325736] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
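
nvmfappstart above wraps the target binary in the namespace command and then blocks until its RPC socket answers: NVMF_APP expands to ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, the PID (235000 here) is recorded as nvmfpid, and waitforlisten polls /var/tmp/spdk.sock. A rough equivalent of that launch-and-wait step; the polling loop is an assumption about waitforlisten's mechanics, not a copy of it:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC socket until the target responds (sketch of waitforlisten).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1      # bail out if the target died early
        sleep 0.1
    done

The unix-domain RPC socket is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk, which is why the subsequent rpc_cmd calls need no netns wrapper.
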
00:09:07.556 [2024-11-15 10:49:26.325808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.556 [2024-11-15 10:49:26.426538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.556 [2024-11-15 10:49:26.476492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.556 [2024-11-15 10:49:26.476545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.556 [2024-11-15 10:49:26.476553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.557 [2024-11-15 10:49:26.476571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.557 [2024-11-15 10:49:26.476583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.557 [2024-11-15 10:49:26.477345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 [2024-11-15 10:49:27.181706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 [2024-11-15 10:49:27.205992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 malloc0 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.817 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.817 { 00:09:07.817 "params": { 00:09:07.817 "name": "Nvme$subsystem", 00:09:07.817 "trtype": "$TEST_TRANSPORT", 00:09:07.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.817 "adrfam": "ipv4", 00:09:07.817 "trsvcid": "$NVMF_PORT", 00:09:07.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.817 "hdgst": ${hdgst:-false}, 00:09:07.817 "ddgst": ${ddgst:-false} 00:09:07.817 }, 00:09:07.817 "method": "bdev_nvme_attach_controller" 00:09:07.817 } 00:09:07.817 EOF 00:09:07.817 )") 00:09:07.818 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:07.818 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
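With the target listening, the rpc_cmd calls traced above build the zcopy test fixture. Replayed as plain scripts/rpc.py invocations (assumption: rpc_cmd is the test suite's thin wrapper around this script; every argument below is taken verbatim from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                        # any host, serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                            # I/O listener
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery listener
    $RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB RAM bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1      # expose malloc0 as NSID 1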
00:09:07.818 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:07.818 10:49:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.818 "params": { 00:09:07.818 "name": "Nvme1", 00:09:07.818 "trtype": "tcp", 00:09:07.818 "traddr": "10.0.0.2", 00:09:07.818 "adrfam": "ipv4", 00:09:07.818 "trsvcid": "4420", 00:09:07.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.818 "hdgst": false, 00:09:07.818 "ddgst": false 00:09:07.818 }, 00:09:07.818 "method": "bdev_nvme_attach_controller" 00:09:07.818 }' 00:09:07.818 [2024-11-15 10:49:27.307544] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:09:07.818 [2024-11-15 10:49:27.307614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235283 ] 00:09:08.078 [2024-11-15 10:49:27.399050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.078 [2024-11-15 10:49:27.451375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.338 Running I/O for 10 seconds... 00:09:10.219 6411.00 IOPS, 50.09 MiB/s [2024-11-15T09:49:30.688Z] 6462.50 IOPS, 50.49 MiB/s [2024-11-15T09:49:32.071Z] 6476.67 IOPS, 50.60 MiB/s [2024-11-15T09:49:33.012Z] 6489.25 IOPS, 50.70 MiB/s [2024-11-15T09:49:33.960Z] 6494.20 IOPS, 50.74 MiB/s [2024-11-15T09:49:34.899Z] 6689.33 IOPS, 52.26 MiB/s [2024-11-15T09:49:35.840Z] 7117.43 IOPS, 55.60 MiB/s [2024-11-15T09:49:36.779Z] 7439.75 IOPS, 58.12 MiB/s [2024-11-15T09:49:37.719Z] 7691.56 IOPS, 60.09 MiB/s [2024-11-15T09:49:37.719Z] 7893.30 IOPS, 61.67 MiB/s 00:09:18.192 Latency(us) 00:09:18.192 [2024-11-15T09:49:37.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:18.192 Verification LBA range: start 0x0 length 0x1000 00:09:18.192 Nvme1n1 : 10.01 7897.11 61.70 0.00 0.00 16157.76 2430.29 27197.44 00:09:18.192 [2024-11-15T09:49:37.719Z] =================================================================================================================== 00:09:18.192 [2024-11-15T09:49:37.719Z] Total : 7897.11 61.70 0.00 0.00 16157.76 2430.29 27197.44 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=237364 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.452 { 00:09:18.452 "params": { 00:09:18.452 "name": 
"Nvme$subsystem", 00:09:18.452 "trtype": "$TEST_TRANSPORT", 00:09:18.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.452 "adrfam": "ipv4", 00:09:18.452 "trsvcid": "$NVMF_PORT", 00:09:18.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.452 "hdgst": ${hdgst:-false}, 00:09:18.452 "ddgst": ${ddgst:-false} 00:09:18.452 }, 00:09:18.452 "method": "bdev_nvme_attach_controller" 00:09:18.452 } 00:09:18.452 EOF 00:09:18.452 )") 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:18.452 [2024-11-15 10:49:37.809047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.809074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:18.452 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.452 "params": { 00:09:18.452 "name": "Nvme1", 00:09:18.452 "trtype": "tcp", 00:09:18.452 "traddr": "10.0.0.2", 00:09:18.452 "adrfam": "ipv4", 00:09:18.452 "trsvcid": "4420", 00:09:18.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.452 "hdgst": false, 00:09:18.452 "ddgst": false 00:09:18.452 }, 00:09:18.452 "method": "bdev_nvme_attach_controller" 00:09:18.452 }' 00:09:18.452 [2024-11-15 10:49:37.821048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.821057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.833076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.833085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.845106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.845115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.851921] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:09:18.452 [2024-11-15 10:49:37.851969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237364 ] 00:09:18.452 [2024-11-15 10:49:37.857136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.857144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.869165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.869174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.881196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.881203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.893227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.893234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.905257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.905264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.917288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.917295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.929318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.929326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.933332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.452 [2024-11-15 10:49:37.941351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.941360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.953380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.953393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.452 [2024-11-15 10:49:37.962438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.452 [2024-11-15 10:49:37.965409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.452 [2024-11-15 10:49:37.965417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.453 [2024-11-15 10:49:37.977449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.453 [2024-11-15 10:49:37.977458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:37.989475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:37.989488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.001503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:18.713 [2024-11-15 10:49:38.001515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.013533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.013543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.025568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.025576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.037802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.037819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.049828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.049840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.061862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.061874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.073894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.073905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.085921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.085929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.097954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.097961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.109987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.109995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.122024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.122036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.134052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.134060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.146084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.146092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.158116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.158125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.170148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.170162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 
10:49:38.182180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.182189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.194213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.194220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.206245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.206254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 [2024-11-15 10:49:38.218285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.218299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.713 Running I/O for 5 seconds... 00:09:18.713 [2024-11-15 10:49:38.230310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.713 [2024-11-15 10:49:38.230318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.244978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.244995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.258654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.258670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.271666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.271681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.284162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.284177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.297240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.297255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.310149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.310165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.322728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.322742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.335648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.335664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.349284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.349299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.362501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:18.973 [2024-11-15 10:49:38.362516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.375956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.375971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.389473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.389487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.402663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.402678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.415844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.415864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.428528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.428543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.441656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.441671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.454737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.454752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.468125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.468140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.481539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.481554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.973 [2024-11-15 10:49:38.495156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.973 [2024-11-15 10:49:38.495171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.508013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.508028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.521385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.521400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.535276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.535291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.547920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.547935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.561212] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.561226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.574796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.574811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.588824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.588839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.601378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.601392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.614176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.614191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.628121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.628136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.642146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.642161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.655261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.655276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.668530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.668549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.681832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.681847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.695320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.695335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.708345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.708359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.720561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.720578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.734387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.734403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.747823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.747838] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.233 [2024-11-15 10:49:38.761344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.233 [2024-11-15 10:49:38.761359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.773994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.774009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.787704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.787718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.800274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.800289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.813810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.813825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.826550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.826570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.839335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.839350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.853016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.853031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.865869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.865884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.878677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.878692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.891823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.891838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.904887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.904902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.917429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.917444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.930181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.493 [2024-11-15 10:49:38.930195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.493 [2024-11-15 10:49:38.942983] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:38.942999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:38.956275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:38.956290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:38.969912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:38.969927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:38.982929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:38.982943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:38.995694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:38.995709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:39.008885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:39.008901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.494 [2024-11-15 10:49:39.021774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.494 [2024-11-15 10:49:39.021790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.034817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.034833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.048080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.048095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.061917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.061932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.075787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.075802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.088644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.088659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.102186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.102201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.115238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.115253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.127948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.127963] 
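Each timestamped pair in this stretch is one failed RPC: with bdevperf I/O in flight, the test keeps re-requesting NSID 1, which malloc0 already occupies, so spdk_nvmf_subsystem_add_ns_ext rejects it and nvmf_rpc_ns_paused reports the failure from the paused-subsystem callback before the subsystem resumes. A minimal loop reproducing that cadence (assumption: this mirrors what the zcopy test drives; $perfpid is the bdevperf PID captured as perfpid=237364 above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # While bdevperf ($perfpid) is alive, hammer the occupied NSID; every call
    # pauses the subsystem, fails with "Requested NSID 1 already in use", resumes.
    while kill -0 "$perfpid" 2>/dev/null; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done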
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.141473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.141488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.154982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.154997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.168254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.168269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.182038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.182054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.195167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.195181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.208772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.208788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.221460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.221475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 18889.00 IOPS, 147.57 MiB/s [2024-11-15T09:49:39.282Z] [2024-11-15 10:49:39.234199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.234214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.246560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.246580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.258986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.259001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.272436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.272452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.755 [2024-11-15 10:49:39.284910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.755 [2024-11-15 10:49:39.284925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.297429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.297444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.310782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.310797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 
10:49:39.324161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.324175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.337673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.337689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.350664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.350679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.364279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.364294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.377552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.377573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.390820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.390835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.404553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.404573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.417926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.417941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.431190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.431204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.444711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.444726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.458313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.458328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.471402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.471417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.484065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.484079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.496588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.496603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.509999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.510014] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.523866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.523881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.016 [2024-11-15 10:49:39.536580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.016 [2024-11-15 10:49:39.536594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.550471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.550487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.563304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.563318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.576933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.576948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.589496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.589510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.602845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.602859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.615727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.615742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.629620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.629643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.642186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.642201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.655507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.655531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.669361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.669376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.683038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.683052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.696336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.696350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.709211] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.709226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.722453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.722467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.735870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.735885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.748868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.276 [2024-11-15 10:49:39.748882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.276 [2024-11-15 10:49:39.762581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.277 [2024-11-15 10:49:39.762596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.277 [2024-11-15 10:49:39.776091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.277 [2024-11-15 10:49:39.776106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.277 [2024-11-15 10:49:39.789482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.277 [2024-11-15 10:49:39.789496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.277 [2024-11-15 10:49:39.802366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.277 [2024-11-15 10:49:39.802381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.536 [2024-11-15 10:49:39.815992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.536 [2024-11-15 10:49:39.816007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.536 [2024-11-15 10:49:39.828727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.536 [2024-11-15 10:49:39.828741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.536 [2024-11-15 10:49:39.841870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.536 [2024-11-15 10:49:39.841884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.855296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.855311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.869173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.869188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.882970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.882985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.895572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.895587] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.909171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.909191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.922444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.922459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.935425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.935440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.949223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.949237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.962099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.962114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.974870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.974885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:39.987621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:39.987636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.000378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.000392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.013929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.013946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.026518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.026534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.039866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.039881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.052669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.052684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.537 [2024-11-15 10:49:40.065028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.537 [2024-11-15 10:49:40.065043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.078097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.078113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.091264] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.091279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.104537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.104552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.117596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.117610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.130614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.130629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.143754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.143770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.156902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.156921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.169782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.169797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.183662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.183677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.196980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.196994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.210715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.210729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.223678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.223693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 18974.50 IOPS, 148.24 MiB/s [2024-11-15T09:49:40.324Z] [2024-11-15 10:49:40.237111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.237126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.250389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.250404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.262670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.797 [2024-11-15 10:49:40.262685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.797 [2024-11-15 10:49:40.276119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:20.797 [2024-11-15 10:49:40.276134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:20.797 [2024-11-15 10:49:40.288879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:20.797 [2024-11-15 10:49:40.288893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats about every 13 ms, 10:49:40.301 through 10:49:41.234, while the background I/O job keeps running -- duplicates folded]
00:09:21.839 19005.33 IOPS, 148.48 MiB/s [2024-11-15T09:49:41.366Z]
[error pair continues, 10:49:41.248 through 10:49:42.233 -- duplicates folded]
00:09:22.882 19037.25 IOPS, 148.73 MiB/s [2024-11-15T09:49:42.409Z]
[error pair continues, 10:49:42.247 through 10:49:43.240 -- duplicates folded]
00:09:23.924 19047.80 IOPS, 148.81 MiB/s
00:09:23.924 Latency(us)
00:09:23.925 [2024-11-15T09:49:43.451Z] Device Information          : runtime(s)     IOPS    MiB/s  Fail/s   TO/s  Average      min      max
00:09:23.925 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:23.925 Nvme1n1                     :       5.01 19049.02   148.82    0.00   0.00  6713.99  2962.77 14308.69
00:09:23.925 [2024-11-15T09:49:43.452Z] ===================================================================================================================
00:09:23.925 [2024-11-15T09:49:43.452Z] Total                       :            19049.02   148.82    0.00   0.00  6713.99  2962.77 14308.69
[error pair resumes, 10:49:43.250 through 10:49:43.346 -- duplicates folded]
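As a cross-check on the bdevperf summary above: 19049.02 IOPS at the job's 8192-byte I/O size works out to 19049.02 * 8192 / 2^20 ~= 148.8 MiB/s, matching the reported throughput. The folded error pair is the expected outcome of the zcopy test step that keeps re-adding NSID 1 over RPC while the I/O job still has the namespace attached; each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces through nvmf_rpc.c. A minimal sketch of a loop that provokes the same pattern (hypothetical standalone script, not the actual test/nvmf/target/zcopy.sh; assumes a running SPDK target with subsystem nqn.2016-06.io.spdk:cnode1, bdev malloc0, and NSID 1 already attached):

  #!/usr/bin/env bash
  # Hypothetical reproduction loop -- not the harness's own code.
  RPC=./scripts/rpc.py                 # assumed path to the SPDK RPC client
  NQN=nqn.2016-06.io.spdk:cnode1       # subsystem NQN used by this test run

  for _ in $(seq 1 200); do
      # Each attempt should be rejected on the target with
      # "Requested NSID 1 already in use" then "Unable to add namespace".
      "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
  done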
00:09:23.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (237364) - No such process
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 237364
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:23.925 delay0
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.925 10:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:24.185 [2024-11-15 10:49:43.564733] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:30.768 Initializing NVMe Controllers
00:09:30.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:30.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:30.768 Initialization complete. Launching workers.
00:09:30.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 124
00:09:30.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 414, failed to submit 30
00:09:30.768 success 230, unsuccessful 184, failed 0
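The abort run's accounting is internally consistent: 414 aborts were submitted and 30 could not be submitted, and the submitted ones split into 230 successful and 184 unsuccessful (230 + 184 = 414), with 0 hard failures. The step can be re-run in isolation; in the sketch below the command forms are taken verbatim from the trace above, but calling scripts/rpc.py directly is an assumption -- the harness goes through its rpc_cmd helper instead:

  #!/usr/bin/env bash
  # Standalone re-run of the traced delay-bdev + abort step (sketch).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  # Swap NSID 1 for a high-latency delay bdev stacked on malloc0, as zcopy.sh does.
  "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
  "$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1

  # Drive randrw traffic with in-flight aborts against the slowed namespace.
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps queued I/Os in flight long enough for the aborts to have something to catch, which is presumably why the test inserts it before running the abort example.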
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.768 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.678 00:09:32.678 real 0m33.588s 00:09:32.678 user 0m44.142s 00:09:32.678 sys 0m11.420s 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.678 ************************************ 00:09:32.678 END TEST nvmf_zcopy 00:09:32.678 ************************************ 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.678 ************************************ 00:09:32.678 START TEST nvmf_nmic 00:09:32.678 ************************************ 00:09:32.678 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:32.939 * Looking for test storage... 
00:09:32.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.939 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:32.939 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:32.939 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:32.939 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:32.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.940 --rc genhtml_branch_coverage=1 00:09:32.940 --rc genhtml_function_coverage=1 00:09:32.940 --rc genhtml_legend=1 00:09:32.940 --rc geninfo_all_blocks=1 00:09:32.940 --rc geninfo_unexecuted_blocks=1 00:09:32.940 00:09:32.940 ' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:32.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.940 --rc genhtml_branch_coverage=1 00:09:32.940 --rc genhtml_function_coverage=1 00:09:32.940 --rc genhtml_legend=1 00:09:32.940 --rc geninfo_all_blocks=1 00:09:32.940 --rc geninfo_unexecuted_blocks=1 00:09:32.940 00:09:32.940 ' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:32.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.940 --rc genhtml_branch_coverage=1 00:09:32.940 --rc genhtml_function_coverage=1 00:09:32.940 --rc genhtml_legend=1 00:09:32.940 --rc geninfo_all_blocks=1 00:09:32.940 --rc geninfo_unexecuted_blocks=1 00:09:32.940 00:09:32.940 ' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:32.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.940 --rc genhtml_branch_coverage=1 00:09:32.940 --rc genhtml_function_coverage=1 00:09:32.940 --rc genhtml_legend=1 00:09:32.940 --rc geninfo_all_blocks=1 00:09:32.940 --rc geninfo_unexecuted_blocks=1 00:09:32.940 00:09:32.940 ' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
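The cmp_versions walk a few lines up is scripts/common.sh deciding whether the installed lcov (1.15 here) predates version 2 before the coverage flags are chosen: both version strings are split on '.', '-' and ':' and compared component by component. A minimal stand-alone sketch of that comparison follows; the logic is reconstructed from the trace for illustration and is not a verbatim copy of SPDK's scripts/common.sh (the real helper also validates each component through its decimal function):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        local v c1 c2 ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # pad the shorter version with zeros
            ((c1 > c2)) && return 1
            ((c1 < c2)) && return 0
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo "old lcov: keep the --rc lcov_branch_coverage=1 spelling"

Here 1.15 < 2 because the first components already decide it (1 < 2), which is exactly the branch the trace takes before exporting the LCOV_OPTS shown above.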
00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.940 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:32.940 
10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:32.941 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.081 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:41.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:41.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.082 10:49:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:41.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:41.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:09:41.082 00:09:41.082 --- 10.0.0.2 ping statistics --- 00:09:41.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.082 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:41.082 00:09:41.082 --- 10.0.0.1 ping statistics --- 00:09:41.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.082 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=243884 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 243884 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 243884 ']' 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.082 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.082 [2024-11-15 10:49:59.922047] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
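By this point nvmftestinit has moved one port of the E810 pair into the cvl_0_0_ns_spdk namespace, addressed both ends (10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace), opened port 4420 in iptables, and proved reachability with the two pings above; nvmfappstart then launches nvmf_tgt inside the namespace and waits for its RPC socket. A condensed sketch of that launch-and-provision sequence, using only commands visible in this trace (the socket-polling loop is a simplified stand-in for the suite's waitforlisten helper, and rpc.py stands in for rpc_cmd):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    # Start the NVMe-oF target inside the test namespace.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait for the RPC UNIX socket rather than sleeping a fixed time.
    for _ in {1..100}; do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done

    # Provision the target the same way the nmic test does next:
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Teardown is the reverse: kill $nvmfpid and delete the namespace, which is what nvmftestfini does at the end of each test.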
00:09:41.082 [2024-11-15 10:49:59.922112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.082 [2024-11-15 10:50:00.024993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.082 [2024-11-15 10:50:00.087142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.082 [2024-11-15 10:50:00.087205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.082 [2024-11-15 10:50:00.087214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.082 [2024-11-15 10:50:00.087221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.082 [2024-11-15 10:50:00.087228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.082 [2024-11-15 10:50:00.089420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.082 [2024-11-15 10:50:00.089545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.082 [2024-11-15 10:50:00.089688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.082 [2024-11-15 10:50:00.089875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.344 [2024-11-15 10:50:00.809305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.344 Malloc0 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.344 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.606 [2024-11-15 10:50:00.888350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:41.606 test case1: single bdev can't be used in multiple subsystems 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:41.606 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.607 [2024-11-15 10:50:00.924172] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:41.607 [2024-11-15 10:50:00.924199] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:41.607 [2024-11-15 10:50:00.924207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.607 request: 00:09:41.607 { 00:09:41.607 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:41.607 "namespace": { 00:09:41.607 "bdev_name": "Malloc0", 00:09:41.607 "no_auto_visible": false, 
00:09:41.607 "no_metadata": false 00:09:41.607 }, 00:09:41.607 "method": "nvmf_subsystem_add_ns", 00:09:41.607 "req_id": 1 00:09:41.607 } 00:09:41.607 Got JSON-RPC error response 00:09:41.607 response: 00:09:41.607 { 00:09:41.607 "code": -32602, 00:09:41.607 "message": "Invalid parameters" 00:09:41.607 } 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:41.607 Adding namespace failed - expected result. 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:41.607 test case2: host connect to nvmf target in multiple paths 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.607 [2024-11-15 10:50:00.936362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.607 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.990 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:44.902 10:50:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.902 10:50:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:44.902 10:50:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.902 10:50:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:44.902 10:50:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.810 10:50:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:46.810 10:50:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:46.810 [global] 00:09:46.810 thread=1 00:09:46.810 invalidate=1 00:09:46.810 rw=write 00:09:46.810 time_based=1 00:09:46.810 runtime=1 00:09:46.810 ioengine=libaio 00:09:46.810 direct=1 00:09:46.810 bs=4096 00:09:46.810 iodepth=1 00:09:46.810 norandommap=0 00:09:46.810 numjobs=1 00:09:46.810 00:09:46.810 verify_dump=1 00:09:46.810 verify_backlog=512 00:09:46.810 verify_state_save=0 00:09:46.810 do_verify=1 00:09:46.810 verify=crc32c-intel 00:09:46.810 [job0] 00:09:46.810 filename=/dev/nvme0n1 00:09:46.810 Could not set queue depth (nvme0n1) 00:09:46.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.810 fio-3.35 00:09:46.810 Starting 1 thread 00:09:48.192 00:09:48.192 job0: (groupid=0, jobs=1): err= 0: pid=245280: Fri Nov 15 10:50:07 2024 00:09:48.192 read: IOPS=15, BW=63.3KiB/s (64.8kB/s)(64.0KiB/1011msec) 00:09:48.192 slat (nsec): min=26502, max=27158, avg=26669.94, stdev=163.52 00:09:48.192 clat (usec): min=40992, max=42070, avg=41787.14, stdev=375.09 00:09:48.192 lat (usec): min=41019, max=42097, avg=41813.81, stdev=375.13 00:09:48.192 clat percentiles (usec): 00:09:48.193 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:48.193 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:48.193 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:48.193 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:48.193 | 99.99th=[42206] 00:09:48.193 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:48.193 slat (usec): min=10, max=26888, avg=83.54, stdev=1186.97 00:09:48.193 clat (usec): min=222, max=803, avg=577.86, stdev=106.22 00:09:48.193 lat (usec): min=237, max=27610, avg=661.39, stdev=1198.38 00:09:48.193 clat percentiles (usec): 00:09:48.193 | 1.00th=[ 314], 5.00th=[ 388], 10.00th=[ 420], 20.00th=[ 490], 00:09:48.193 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:09:48.193 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 725], 00:09:48.193 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 807], 99.95th=[ 807], 00:09:48.193 | 99.99th=[ 807] 00:09:48.193 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:48.193 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:48.193 lat (usec) : 250=0.57%, 500=21.59%, 750=73.30%, 1000=1.52% 00:09:48.193 lat (msec) : 50=3.03% 00:09:48.193 cpu : usr=0.89%, sys=1.39%, ctx=530, majf=0, minf=1 00:09:48.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.193 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.193 00:09:48.193 Run status group 0 (all jobs): 00:09:48.193 READ: bw=63.3KiB/s (64.8kB/s), 63.3KiB/s-63.3KiB/s (64.8kB/s-64.8kB/s), io=64.0KiB (65.5kB), run=1011-1011msec 00:09:48.193 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:09:48.193 00:09:48.193 Disk stats (read/write): 00:09:48.193 nvme0n1: ios=38/512, merge=0/0, ticks=1509/276, in_queue=1785, 
util=98.90% 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.193 rmmod nvme_tcp 00:09:48.193 rmmod nvme_fabrics 00:09:48.193 rmmod nvme_keyring 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 243884 ']' 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 243884 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 243884 ']' 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 243884 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:48.193 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 243884 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 243884' 00:09:48.453 killing process with pid 243884 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 
243884 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 243884 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.453 10:50:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.998 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.998 00:09:50.998 real 0m17.801s 00:09:50.998 user 0m47.257s 00:09:50.998 sys 0m6.541s 00:09:50.998 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:50.998 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:50.998 ************************************ 00:09:50.998 END TEST nvmf_nmic 00:09:50.998 ************************************ 00:09:50.998 10:50:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.998 10:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:50.998 10:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:50.998 10:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.998 ************************************ 00:09:50.998 START TEST nvmf_fio_target 00:09:50.998 ************************************ 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.999 * Looking for test storage... 
00:09:50.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:50.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.999 --rc genhtml_branch_coverage=1 00:09:50.999 --rc genhtml_function_coverage=1 00:09:50.999 --rc genhtml_legend=1 00:09:50.999 --rc geninfo_all_blocks=1 00:09:50.999 --rc geninfo_unexecuted_blocks=1 00:09:50.999 00:09:50.999 ' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:50.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.999 --rc genhtml_branch_coverage=1 00:09:50.999 --rc genhtml_function_coverage=1 00:09:50.999 --rc genhtml_legend=1 00:09:50.999 --rc geninfo_all_blocks=1 00:09:50.999 --rc geninfo_unexecuted_blocks=1 00:09:50.999 00:09:50.999 ' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:50.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.999 --rc genhtml_branch_coverage=1 00:09:50.999 --rc genhtml_function_coverage=1 00:09:50.999 --rc genhtml_legend=1 00:09:50.999 --rc geninfo_all_blocks=1 00:09:50.999 --rc geninfo_unexecuted_blocks=1 00:09:50.999 00:09:50.999 ' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:50.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.999 --rc genhtml_branch_coverage=1 00:09:50.999 --rc genhtml_function_coverage=1 00:09:50.999 --rc genhtml_legend=1 00:09:50.999 --rc geninfo_all_blocks=1 00:09:50.999 --rc geninfo_unexecuted_blocks=1 00:09:50.999 00:09:50.999 ' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.999 10:50:10 
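The "[: : integer expression expected" message from common.sh line 33 above is produced by the preceding '[' '' -eq 1 ']' entry: test's -eq needs an integer on both sides, and the variable being tested expanded to the empty string. The failing pattern in isolation, with a defensive variant (hypothetical variable name, not a patch to common.sh):

    flag=""
    [ "$flag" -eq 1 ] && echo on        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on   # defaulting empty to 0 makes the test fail quietly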
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.999 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.144 10:50:17 
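The xtrace_disable / set +x entries above bracket the PCI device scan so that the hundreds of array operations it performs do not flood the log; tracing is turned back on once the scan returns. A simplified sketch of the bracket (the real helpers, e.g. xtrace_disable_per_cmd in autotest_common.sh, also save and restore the previous xtrace state):

    set -x                               # the test scripts run with tracing on
    xtrace_disable() { set +x; }
    xtrace_restore() { set -x; }

    xtrace_disable
    ls /sys/bus/pci/devices >/dev/null   # stand-in for the noisy device scan
    xtrace_restore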
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:59.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:59.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.144 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.144 10:50:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:59.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:59.145 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.145 10:50:17 
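The discovery loop above resolves each e810 PCI function to its kernel net device by globbing sysfs, which is why the log can report cvl_0_0 and cvl_0_1 without consulting any driver tool. The same lookup in isolation (generic Linux; the BDFs are this host's and will differ elsewhere):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue             # glob may not match if the port is unbound
            echo "Found net devices under $pci: ${path##*/}"
        done
    done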
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:59.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:59.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms
00:09:59.145
00:09:59.145 --- 10.0.0.2 ping statistics ---
00:09:59.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:59.145 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms
00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:59.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:59.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:09:59.145 00:09:59.145 --- 10.0.0.1 ping statistics --- 00:09:59.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.145 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=249950 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 249950 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 249950 ']' 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.145 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.145 [2024-11-15 10:50:17.897092] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
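Condensed from the nvmf_tcp_init trace above: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the firewall is opened for TCP port 4420, and reachability is ping-verified in both directions. As a standalone script (root required; interface and namespace names as in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Running the target inside the namespace (via ip netns exec, as the NVMF_TARGET_NS_CMD entry shows) is what lets a single physical machine exercise the full TCP path over real e810 hardware.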
00:09:59.145 [2024-11-15 10:50:17.897155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.145 [2024-11-15 10:50:17.997893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.145 [2024-11-15 10:50:18.050879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.145 [2024-11-15 10:50:18.050933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.145 [2024-11-15 10:50:18.050941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.145 [2024-11-15 10:50:18.050948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.145 [2024-11-15 10:50:18.050954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.145 [2024-11-15 10:50:18.053366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.145 [2024-11-15 10:50:18.053507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.145 [2024-11-15 10:50:18.053668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.145 [2024-11-15 10:50:18.053668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.406 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.406 [2024-11-15 10:50:18.936164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.668 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.928 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:59.928 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.928 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:59.928 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.189 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:00.189 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.449 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:00.449 10:50:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:00.708 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.968 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:00.968 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.968 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:00.968 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.228 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:01.228 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:01.487 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.747 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.747 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.747 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.747 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.008 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.267 [2024-11-15 10:50:21.573614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.267 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:02.267 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:02.527 10:50:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.438 10:50:23 
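The rpc.py calls traced through fio.sh above provision the whole target in a handful of commands. Gathered into one script they read as follows (paths, NQNs, serial and sizes copied from the log; the seven identical bdev_malloc_create calls are collapsed into a loop for brevity):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done          # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The waitforserial loop that follows simply polls lsblk -l -o NAME,SERIAL until four namespaces carrying the SPDKISFASTANDAWESOME serial appear, after which fio is started against /dev/nvme0n1 through /dev/nvme0n4.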
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:04.438 10:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:04.438 10:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.438 10:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:04.438 10:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:04.438 10:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:06.351 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:06.351 [global] 00:10:06.351 thread=1 00:10:06.351 invalidate=1 00:10:06.351 rw=write 00:10:06.351 time_based=1 00:10:06.351 runtime=1 00:10:06.351 ioengine=libaio 00:10:06.351 direct=1 00:10:06.351 bs=4096 00:10:06.351 iodepth=1 00:10:06.351 norandommap=0 00:10:06.351 numjobs=1 00:10:06.351 00:10:06.351 verify_dump=1 00:10:06.351 verify_backlog=512 00:10:06.351 verify_state_save=0 00:10:06.351 do_verify=1 00:10:06.351 verify=crc32c-intel 00:10:06.351 [job0] 00:10:06.351 filename=/dev/nvme0n1 00:10:06.351 [job1] 00:10:06.351 filename=/dev/nvme0n2 00:10:06.351 [job2] 00:10:06.351 filename=/dev/nvme0n3 00:10:06.351 [job3] 00:10:06.351 filename=/dev/nvme0n4 00:10:06.351 Could not set queue depth (nvme0n1) 00:10:06.351 Could not set queue depth (nvme0n2) 00:10:06.351 Could not set queue depth (nvme0n3) 00:10:06.351 Could not set queue depth (nvme0n4) 00:10:06.611 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.611 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.611 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.611 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.611 fio-3.35 00:10:06.611 Starting 4 threads 00:10:08.073 00:10:08.073 job0: (groupid=0, jobs=1): err= 0: pid=251658: Fri Nov 15 10:50:27 2024 00:10:08.073 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:08.073 slat (nsec): min=7083, max=59013, avg=26034.16, stdev=2755.74 00:10:08.073 clat (usec): min=380, max=1199, avg=939.37, stdev=98.86 00:10:08.073 lat (usec): min=406, max=1225, avg=965.40, stdev=99.37 00:10:08.073 clat percentiles (usec): 00:10:08.073 | 1.00th=[ 529], 5.00th=[ 742], 10.00th=[ 840], 20.00th=[ 898], 
00:10:08.073 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:10:08.073 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:10:08.073 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:08.073 | 99.99th=[ 1205] 00:10:08.073 write: IOPS=904, BW=3616KiB/s (3703kB/s)(3620KiB/1001msec); 0 zone resets 00:10:08.073 slat (nsec): min=9854, max=64822, avg=30684.07, stdev=9749.13 00:10:08.073 clat (usec): min=137, max=1757, avg=515.98, stdev=143.85 00:10:08.073 lat (usec): min=148, max=1790, avg=546.67, stdev=147.11 00:10:08.073 clat percentiles (usec): 00:10:08.073 | 1.00th=[ 235], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 383], 00:10:08.073 | 30.00th=[ 441], 40.00th=[ 478], 50.00th=[ 537], 60.00th=[ 562], 00:10:08.073 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 734], 00:10:08.073 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 1762], 99.95th=[ 1762], 00:10:08.073 | 99.99th=[ 1762] 00:10:08.073 bw ( KiB/s): min= 4096, max= 4096, per=43.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.073 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.073 lat (usec) : 250=1.48%, 500=26.68%, 750=35.22%, 1000=29.29% 00:10:08.073 lat (msec) : 2=7.34% 00:10:08.073 cpu : usr=2.10%, sys=4.10%, ctx=1420, majf=0, minf=1 00:10:08.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.073 issued rwts: total=512,905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.073 job1: (groupid=0, jobs=1): err= 0: pid=251672: Fri Nov 15 10:50:27 2024 00:10:08.073 read: IOPS=18, BW=75.8KiB/s (77.6kB/s)(76.0KiB/1003msec) 00:10:08.073 slat (nsec): min=10379, max=27314, avg=26233.89, stdev=3842.12 00:10:08.073 clat (usec): min=40752, max=41084, avg=40949.42, stdev=71.52 00:10:08.073 lat (usec): min=40763, max=41111, avg=40975.65, stdev=74.11 00:10:08.073 clat percentiles (usec): 00:10:08.073 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:08.073 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:08.073 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:08.073 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:08.073 | 99.99th=[41157] 00:10:08.073 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:08.073 slat (nsec): min=9793, max=55199, avg=26048.19, stdev=12447.06 00:10:08.073 clat (usec): min=151, max=1310, avg=403.07, stdev=102.56 00:10:08.073 lat (usec): min=162, max=1322, avg=429.12, stdev=110.93 00:10:08.073 clat percentiles (usec): 00:10:08.073 | 1.00th=[ 229], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 293], 00:10:08.074 | 30.00th=[ 330], 40.00th=[ 379], 50.00th=[ 420], 60.00th=[ 445], 00:10:08.074 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 537], 00:10:08.074 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 1303], 99.95th=[ 1303], 00:10:08.074 | 99.99th=[ 1303] 00:10:08.074 bw ( KiB/s): min= 4096, max= 4096, per=43.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.074 lat (usec) : 250=3.01%, 500=80.23%, 750=12.99% 00:10:08.074 lat (msec) : 2=0.19%, 50=3.58% 00:10:08.074 cpu : usr=0.40%, sys=1.50%, ctx=532, majf=0, minf=1 00:10:08.074 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.074 job2: (groupid=0, jobs=1): err= 0: pid=251691: Fri Nov 15 10:50:27 2024 00:10:08.074 read: IOPS=17, BW=70.1KiB/s (71.8kB/s)(72.0KiB/1027msec) 00:10:08.074 slat (nsec): min=25551, max=26176, avg=25754.50, stdev=142.01 00:10:08.074 clat (usec): min=1012, max=42092, avg=39527.16, stdev=9619.38 00:10:08.074 lat (usec): min=1037, max=42118, avg=39552.91, stdev=9619.39 00:10:08.074 clat percentiles (usec): 00:10:08.074 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157], 00:10:08.074 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:08.074 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:08.074 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.074 | 99.99th=[42206] 00:10:08.074 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:08.074 slat (nsec): min=9674, max=68462, avg=31900.49, stdev=8118.37 00:10:08.074 clat (usec): min=177, max=980, avg=572.66, stdev=141.51 00:10:08.074 lat (usec): min=188, max=1014, avg=604.56, stdev=143.85 00:10:08.074 clat percentiles (usec): 00:10:08.074 | 1.00th=[ 265], 5.00th=[ 347], 10.00th=[ 392], 20.00th=[ 445], 00:10:08.074 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 611], 00:10:08.074 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 750], 95.00th=[ 824], 00:10:08.074 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 979], 99.95th=[ 979], 00:10:08.074 | 99.99th=[ 979] 00:10:08.074 bw ( KiB/s): min= 4096, max= 4096, per=43.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.074 lat (usec) : 250=0.75%, 500=28.68%, 750=56.98%, 1000=10.19% 00:10:08.074 lat (msec) : 2=0.19%, 50=3.21% 00:10:08.074 cpu : usr=0.97%, sys=1.36%, ctx=532, majf=0, minf=1 00:10:08.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.074 job3: (groupid=0, jobs=1): err= 0: pid=251698: Fri Nov 15 10:50:27 2024 00:10:08.074 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1020msec) 00:10:08.074 slat (nsec): min=26191, max=26766, avg=26435.53, stdev=151.49 00:10:08.074 clat (usec): min=40968, max=42090, avg=41794.51, stdev=364.90 00:10:08.074 lat (usec): min=40995, max=42116, avg=41820.94, stdev=364.88 00:10:08.074 clat percentiles (usec): 00:10:08.074 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:08.074 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:08.074 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:08.074 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.074 | 99.99th=[42206] 00:10:08.074 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:08.074 slat (nsec): min=10300, max=55360, avg=32789.44, stdev=8023.75 00:10:08.074 clat (usec): 
min=225, max=923, avg=560.04, stdev=121.35 00:10:08.074 lat (usec): min=261, max=957, avg=592.83, stdev=124.25 00:10:08.074 clat percentiles (usec): 00:10:08.074 | 1.00th=[ 277], 5.00th=[ 351], 10.00th=[ 404], 20.00th=[ 461], 00:10:08.074 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:10:08.074 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 750], 00:10:08.074 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922], 00:10:08.074 | 99.99th=[ 922] 00:10:08.074 bw ( KiB/s): min= 4096, max= 4096, per=43.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.074 lat (usec) : 250=0.19%, 500=29.30%, 750=62.38%, 1000=4.91% 00:10:08.074 lat (msec) : 50=3.21% 00:10:08.074 cpu : usr=0.49%, sys=1.86%, ctx=530, majf=0, minf=1 00:10:08.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.074 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.074 00:10:08.074 Run status group 0 (all jobs): 00:10:08.074 READ: bw=2204KiB/s (2257kB/s), 66.7KiB/s-2046KiB/s (68.3kB/s-2095kB/s), io=2264KiB (2318kB), run=1001-1027msec 00:10:08.074 WRITE: bw=9507KiB/s (9735kB/s), 1994KiB/s-3616KiB/s (2042kB/s-3703kB/s), io=9764KiB (9998kB), run=1001-1027msec 00:10:08.074 00:10:08.074 Disk stats (read/write): 00:10:08.074 nvme0n1: ios=561/616, merge=0/0, ticks=1066/287, in_queue=1353, util=83.47% 00:10:08.074 nvme0n2: ios=63/512, merge=0/0, ticks=1102/196, in_queue=1298, util=87.06% 00:10:08.074 nvme0n3: ios=70/512, merge=0/0, ticks=635/271, in_queue=906, util=95.22% 00:10:08.074 nvme0n4: ios=34/512, merge=0/0, ticks=1381/281, in_queue=1662, util=93.97% 00:10:08.074 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:08.074 [global] 00:10:08.074 thread=1 00:10:08.074 invalidate=1 00:10:08.074 rw=randwrite 00:10:08.074 time_based=1 00:10:08.074 runtime=1 00:10:08.074 ioengine=libaio 00:10:08.074 direct=1 00:10:08.074 bs=4096 00:10:08.074 iodepth=1 00:10:08.074 norandommap=0 00:10:08.074 numjobs=1 00:10:08.074 00:10:08.074 verify_dump=1 00:10:08.074 verify_backlog=512 00:10:08.074 verify_state_save=0 00:10:08.074 do_verify=1 00:10:08.074 verify=crc32c-intel 00:10:08.074 [job0] 00:10:08.074 filename=/dev/nvme0n1 00:10:08.074 [job1] 00:10:08.074 filename=/dev/nvme0n2 00:10:08.074 [job2] 00:10:08.074 filename=/dev/nvme0n3 00:10:08.074 [job3] 00:10:08.074 filename=/dev/nvme0n4 00:10:08.074 Could not set queue depth (nvme0n1) 00:10:08.074 Could not set queue depth (nvme0n2) 00:10:08.074 Could not set queue depth (nvme0n3) 00:10:08.074 Could not set queue depth (nvme0n4) 00:10:08.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.369 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.369 fio-3.35 00:10:08.369 Starting 4 
threads 00:10:09.336 00:10:09.336 job0: (groupid=0, jobs=1): err= 0: pid=252159: Fri Nov 15 10:50:28 2024 00:10:09.336 read: IOPS=17, BW=71.9KiB/s (73.6kB/s)(72.0KiB/1002msec) 00:10:09.336 slat (nsec): min=26732, max=32735, avg=27301.44, stdev=1367.01 00:10:09.336 clat (usec): min=980, max=42068, avg=39339.82, stdev=9583.60 00:10:09.336 lat (usec): min=1008, max=42095, avg=39367.12, stdev=9583.56 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41157], 20.00th=[41157], 00:10:09.336 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:09.336 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:09.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.336 | 99.99th=[42206] 00:10:09.336 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:09.336 slat (nsec): min=9357, max=54899, avg=29254.85, stdev=10264.16 00:10:09.336 clat (usec): min=146, max=946, avg=535.24, stdev=154.19 00:10:09.336 lat (usec): min=158, max=980, avg=564.50, stdev=158.27 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 212], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 400], 00:10:09.336 | 30.00th=[ 453], 40.00th=[ 502], 50.00th=[ 545], 60.00th=[ 586], 00:10:09.336 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 791], 00:10:09.336 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:10:09.336 | 99.99th=[ 947] 00:10:09.336 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:09.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:09.336 lat (usec) : 250=2.45%, 500=36.04%, 750=50.00%, 1000=8.30% 00:10:09.336 lat (msec) : 50=3.21% 00:10:09.336 cpu : usr=0.80%, sys=1.50%, ctx=532, majf=0, minf=1 00:10:09.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.336 job1: (groupid=0, jobs=1): err= 0: pid=252177: Fri Nov 15 10:50:28 2024 00:10:09.336 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec) 00:10:09.336 slat (nsec): min=25167, max=25692, avg=25318.28, stdev=129.32 00:10:09.336 clat (usec): min=40915, max=42006, avg=41112.65, stdev=335.46 00:10:09.336 lat (usec): min=40940, max=42031, avg=41137.96, stdev=335.51 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:09.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:09.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:09.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.336 | 99.99th=[42206] 00:10:09.336 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:09.336 slat (nsec): min=9197, max=53088, avg=29242.10, stdev=7862.79 00:10:09.336 clat (usec): min=140, max=837, avg=490.36, stdev=123.40 00:10:09.336 lat (usec): min=150, max=868, avg=519.61, stdev=125.72 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 237], 5.00th=[ 277], 10.00th=[ 326], 20.00th=[ 375], 00:10:09.336 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 490], 60.00th=[ 519], 00:10:09.336 | 70.00th=[ 562], 80.00th=[ 603], 
90.00th=[ 644], 95.00th=[ 676], 00:10:09.336 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 840], 99.95th=[ 840], 00:10:09.336 | 99.99th=[ 840] 00:10:09.336 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:09.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:09.336 lat (usec) : 250=1.70%, 500=50.19%, 750=42.83%, 1000=1.89% 00:10:09.336 lat (msec) : 50=3.40% 00:10:09.336 cpu : usr=0.59%, sys=1.58%, ctx=531, majf=0, minf=2 00:10:09.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.336 job2: (groupid=0, jobs=1): err= 0: pid=252197: Fri Nov 15 10:50:28 2024 00:10:09.336 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:10:09.336 slat (nsec): min=26388, max=27516, avg=26684.89, stdev=296.81 00:10:09.336 clat (usec): min=40889, max=42015, avg=41526.66, stdev=495.37 00:10:09.336 lat (usec): min=40915, max=42042, avg=41553.35, stdev=495.26 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:09.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:09.336 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:09.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.336 | 99.99th=[42206] 00:10:09.336 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:09.336 slat (nsec): min=8708, max=64551, avg=29208.64, stdev=9569.89 00:10:09.336 clat (usec): min=183, max=896, avg=517.22, stdev=144.39 00:10:09.336 lat (usec): min=193, max=928, avg=546.43, stdev=148.74 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 227], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 379], 00:10:09.336 | 30.00th=[ 437], 40.00th=[ 474], 50.00th=[ 515], 60.00th=[ 562], 00:10:09.336 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 742], 00:10:09.336 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 898], 00:10:09.336 | 99.99th=[ 898] 00:10:09.336 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:09.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:09.336 lat (usec) : 250=3.96%, 500=41.32%, 750=47.55%, 1000=3.77% 00:10:09.336 lat (msec) : 50=3.40% 00:10:09.336 cpu : usr=1.07%, sys=1.75%, ctx=530, majf=0, minf=1 00:10:09.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.336 job3: (groupid=0, jobs=1): err= 0: pid=252204: Fri Nov 15 10:50:28 2024 00:10:09.336 read: IOPS=19, BW=77.9KiB/s (79.8kB/s)(80.0KiB/1027msec) 00:10:09.336 slat (nsec): min=26477, max=27297, avg=26741.70, stdev=211.53 00:10:09.336 clat (usec): min=1184, max=42001, avg=39799.94, stdev=9094.80 00:10:09.336 lat (usec): min=1211, max=42028, avg=39826.68, stdev=9094.77 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 1188], 5.00th=[ 
1188], 10.00th=[41157], 20.00th=[41157], 00:10:09.336 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:09.336 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:09.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.336 | 99.99th=[42206] 00:10:09.336 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:09.336 slat (nsec): min=9731, max=52041, avg=20121.28, stdev=11899.71 00:10:09.336 clat (usec): min=114, max=1030, avg=423.93, stdev=168.66 00:10:09.336 lat (usec): min=124, max=1065, avg=444.05, stdev=177.46 00:10:09.336 clat percentiles (usec): 00:10:09.336 | 1.00th=[ 143], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 277], 00:10:09.336 | 30.00th=[ 289], 40.00th=[ 322], 50.00th=[ 383], 60.00th=[ 445], 00:10:09.336 | 70.00th=[ 515], 80.00th=[ 578], 90.00th=[ 660], 95.00th=[ 725], 00:10:09.336 | 99.00th=[ 914], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:09.336 | 99.99th=[ 1029] 00:10:09.336 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:09.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:09.336 lat (usec) : 250=5.64%, 500=59.96%, 750=27.82%, 1000=2.44% 00:10:09.336 lat (msec) : 2=0.56%, 50=3.57% 00:10:09.336 cpu : usr=0.58%, sys=0.88%, ctx=534, majf=0, minf=1 00:10:09.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.336 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.336 00:10:09.336 Run status group 0 (all jobs): 00:10:09.336 READ: bw=287KiB/s (294kB/s), 69.8KiB/s-77.9KiB/s (71.4kB/s-79.8kB/s), io=296KiB (303kB), run=1002-1032msec 00:10:09.336 WRITE: bw=7938KiB/s (8128kB/s), 1984KiB/s-2044KiB/s (2032kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1032msec 00:10:09.336 00:10:09.336 Disk stats (read/write): 00:10:09.336 nvme0n1: ios=43/512, merge=0/0, ticks=855/256, in_queue=1111, util=99.70% 00:10:09.336 nvme0n2: ios=49/512, merge=0/0, ticks=620/234, in_queue=854, util=88.38% 00:10:09.336 nvme0n3: ios=62/512, merge=0/0, ticks=694/213, in_queue=907, util=100.00% 00:10:09.336 nvme0n4: ios=37/512, merge=0/0, ticks=1505/207, in_queue=1712, util=96.79% 00:10:09.336 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:09.597 [global] 00:10:09.597 thread=1 00:10:09.597 invalidate=1 00:10:09.597 rw=write 00:10:09.597 time_based=1 00:10:09.597 runtime=1 00:10:09.597 ioengine=libaio 00:10:09.597 direct=1 00:10:09.597 bs=4096 00:10:09.597 iodepth=128 00:10:09.597 norandommap=0 00:10:09.597 numjobs=1 00:10:09.597 00:10:09.597 verify_dump=1 00:10:09.597 verify_backlog=512 00:10:09.597 verify_state_save=0 00:10:09.597 do_verify=1 00:10:09.597 verify=crc32c-intel 00:10:09.597 [job0] 00:10:09.597 filename=/dev/nvme0n1 00:10:09.597 [job1] 00:10:09.597 filename=/dev/nvme0n2 00:10:09.597 [job2] 00:10:09.597 filename=/dev/nvme0n3 00:10:09.597 [job3] 00:10:09.597 filename=/dev/nvme0n4 00:10:09.597 Could not set queue depth (nvme0n1) 00:10:09.597 Could not set queue depth (nvme0n2) 00:10:09.597 Could not set queue depth (nvme0n3) 00:10:09.597 Could not set queue depth (nvme0n4) 00:10:09.857 job0: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.857 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.857 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.857 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.857 fio-3.35 00:10:09.857 Starting 4 threads 00:10:11.312 00:10:11.312 job0: (groupid=0, jobs=1): err= 0: pid=252645: Fri Nov 15 10:50:30 2024 00:10:11.312 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1005msec) 00:10:11.312 slat (nsec): min=967, max=12495k, avg=83394.33, stdev=582733.90 00:10:11.312 clat (usec): min=2725, max=31632, avg=9956.66, stdev=3718.82 00:10:11.312 lat (usec): min=4115, max=31634, avg=10040.06, stdev=3759.08 00:10:11.312 clat percentiles (usec): 00:10:11.312 | 1.00th=[ 4621], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7898], 00:10:11.312 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 9372], 00:10:11.312 | 70.00th=[10290], 80.00th=[11863], 90.00th=[14877], 95.00th=[17695], 00:10:11.312 | 99.00th=[23200], 99.50th=[26870], 99.90th=[31065], 99.95th=[31589], 00:10:11.312 | 99.99th=[31589] 00:10:11.313 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:11.313 slat (nsec): min=1720, max=11304k, avg=91079.20, stdev=456222.98 00:10:11.313 clat (usec): min=2537, max=31635, avg=12928.49, stdev=6108.54 00:10:11.313 lat (usec): min=2545, max=31638, avg=13019.56, stdev=6151.36 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 3228], 5.00th=[ 4686], 10.00th=[ 5604], 20.00th=[ 6521], 00:10:11.313 | 30.00th=[ 7963], 40.00th=[10683], 50.00th=[12125], 60.00th=[14091], 00:10:11.313 | 70.00th=[16057], 80.00th=[19268], 90.00th=[21890], 95.00th=[23462], 00:10:11.313 | 99.00th=[26608], 99.50th=[26870], 99.90th=[29754], 99.95th=[29754], 00:10:11.313 | 99.99th=[31589] 00:10:11.313 bw ( KiB/s): min=21808, max=23248, per=24.50%, avg=22528.00, stdev=1018.23, samples=2 00:10:11.313 iops : min= 5452, max= 5812, avg=5632.00, stdev=254.56, samples=2 00:10:11.313 lat (msec) : 4=1.41%, 10=50.45%, 20=38.13%, 50=10.01% 00:10:11.313 cpu : usr=3.98%, sys=6.37%, ctx=561, majf=0, minf=1 00:10:11.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:11.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.313 issued rwts: total=5488,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.313 job1: (groupid=0, jobs=1): err= 0: pid=252654: Fri Nov 15 10:50:30 2024 00:10:11.313 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1045msec) 00:10:11.313 slat (nsec): min=892, max=16932k, avg=102606.32, stdev=697858.79 00:10:11.313 clat (usec): min=3610, max=67035, avg=13110.85, stdev=9468.62 00:10:11.313 lat (usec): min=3619, max=71200, avg=13213.46, stdev=9514.55 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 4883], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8717], 00:10:11.313 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:10:11.313 | 70.00th=[11600], 80.00th=[13566], 90.00th=[20317], 95.00th=[25035], 00:10:11.313 | 99.00th=[61604], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:10:11.313 | 99.99th=[66847] 00:10:11.313 write: IOPS=3919, 
BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:10:11.313 slat (nsec): min=1545, max=17796k, avg=143142.18, stdev=804580.48 00:10:11.313 clat (usec): min=2354, max=69130, avg=19884.86, stdev=12733.56 00:10:11.313 lat (usec): min=2364, max=69139, avg=20028.00, stdev=12817.11 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 4359], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[ 9503], 00:10:11.313 | 30.00th=[11863], 40.00th=[13698], 50.00th=[14615], 60.00th=[18744], 00:10:11.313 | 70.00th=[22938], 80.00th=[30016], 90.00th=[37487], 95.00th=[46400], 00:10:11.313 | 99.00th=[65274], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:10:11.313 | 99.99th=[68682] 00:10:11.313 bw ( KiB/s): min=14264, max=18504, per=17.82%, avg=16384.00, stdev=2998.13, samples=2 00:10:11.313 iops : min= 3566, max= 4626, avg=4096.00, stdev=749.53, samples=2 00:10:11.313 lat (msec) : 4=0.54%, 10=27.91%, 20=48.24%, 50=19.75%, 100=3.55% 00:10:11.313 cpu : usr=2.59%, sys=3.74%, ctx=503, majf=0, minf=1 00:10:11.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:11.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.313 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.313 job2: (groupid=0, jobs=1): err= 0: pid=252679: Fri Nov 15 10:50:30 2024 00:10:11.313 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:10:11.313 slat (nsec): min=912, max=11125k, avg=79003.90, stdev=533844.02 00:10:11.313 clat (usec): min=3565, max=30252, avg=10468.59, stdev=4014.69 00:10:11.313 lat (usec): min=3572, max=30265, avg=10547.60, stdev=4059.83 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:10:11.313 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:10:11.313 | 70.00th=[10421], 80.00th=[11994], 90.00th=[16581], 95.00th=[20841], 00:10:11.313 | 99.00th=[25560], 99.50th=[26084], 99.90th=[27132], 99.95th=[28967], 00:10:11.313 | 99.99th=[30278] 00:10:11.313 write: IOPS=7108, BW=27.8MiB/s (29.1MB/s)(27.9MiB/1003msec); 0 zone resets 00:10:11.313 slat (nsec): min=1583, max=6835.3k, avg=60007.46, stdev=390307.93 00:10:11.313 clat (usec): min=584, max=23920, avg=8077.09, stdev=2591.35 00:10:11.313 lat (usec): min=784, max=23929, avg=8137.10, stdev=2612.38 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 2704], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:10:11.313 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8455], 00:10:11.313 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[10945], 95.00th=[12256], 00:10:11.313 | 99.00th=[14746], 99.50th=[17957], 99.90th=[23200], 99.95th=[23200], 00:10:11.313 | 99.99th=[23987] 00:10:11.313 bw ( KiB/s): min=23256, max=32768, per=30.47%, avg=28012.00, stdev=6726.00, samples=2 00:10:11.313 iops : min= 5814, max= 8192, avg=7003.00, stdev=1681.50, samples=2 00:10:11.313 lat (usec) : 750=0.01%, 1000=0.02% 00:10:11.313 lat (msec) : 2=0.25%, 4=1.73%, 10=70.75%, 20=24.54%, 50=2.70% 00:10:11.313 cpu : usr=4.49%, sys=7.49%, ctx=587, majf=0, minf=2 00:10:11.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:11.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.313 issued 
rwts: total=6656,7130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.313 job3: (groupid=0, jobs=1): err= 0: pid=252688: Fri Nov 15 10:50:30 2024 00:10:11.313 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:10:11.313 slat (nsec): min=944, max=15044k, avg=74507.57, stdev=532930.07 00:10:11.313 clat (usec): min=2750, max=57075, avg=9536.79, stdev=4145.94 00:10:11.313 lat (usec): min=2756, max=57081, avg=9611.30, stdev=4173.82 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 4293], 5.00th=[ 5932], 10.00th=[ 7373], 20.00th=[ 8160], 00:10:11.313 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:11.313 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[11994], 95.00th=[15533], 00:10:11.313 | 99.00th=[21103], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:10:11.313 | 99.99th=[56886] 00:10:11.313 write: IOPS=7140, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1003msec); 0 zone resets 00:10:11.313 slat (nsec): min=1564, max=10431k, avg=65212.44, stdev=394753.44 00:10:11.313 clat (usec): min=588, max=29276, avg=8870.44, stdev=2569.82 00:10:11.313 lat (usec): min=1737, max=29284, avg=8935.65, stdev=2587.53 00:10:11.313 clat percentiles (usec): 00:10:11.313 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 7242], 20.00th=[ 8029], 00:10:11.313 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:11.314 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[11338], 95.00th=[13042], 00:10:11.314 | 99.00th=[20317], 99.50th=[21890], 99.90th=[29230], 99.95th=[29230], 00:10:11.314 | 99.99th=[29230] 00:10:11.314 bw ( KiB/s): min=26008, max=30264, per=30.60%, avg=28136.00, stdev=3009.45, samples=2 00:10:11.314 iops : min= 6502, max= 7566, avg=7034.00, stdev=752.36, samples=2 00:10:11.314 lat (usec) : 750=0.01% 00:10:11.314 lat (msec) : 2=0.11%, 4=0.47%, 10=81.30%, 20=17.07%, 50=0.79% 00:10:11.314 lat (msec) : 100=0.25% 00:10:11.314 cpu : usr=5.99%, sys=6.69%, ctx=591, majf=0, minf=2 00:10:11.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:11.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.314 issued rwts: total=6656,7162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.314 00:10:11.314 Run status group 0 (all jobs): 00:10:11.314 READ: bw=84.6MiB/s (88.7MB/s), 14.3MiB/s-25.9MiB/s (15.0MB/s-27.2MB/s), io=88.4MiB (92.7MB), run=1003-1045msec 00:10:11.314 WRITE: bw=89.8MiB/s (94.1MB/s), 15.3MiB/s-27.9MiB/s (16.1MB/s-29.2MB/s), io=93.8MiB (98.4MB), run=1003-1045msec 00:10:11.314 00:10:11.314 Disk stats (read/write): 00:10:11.314 nvme0n1: ios=4652/4903, merge=0/0, ticks=43019/57924, in_queue=100943, util=97.49% 00:10:11.314 nvme0n2: ios=3106/3215, merge=0/0, ticks=19509/33889, in_queue=53398, util=86.81% 00:10:11.314 nvme0n3: ios=5679/5756, merge=0/0, ticks=44350/34697, in_queue=79047, util=95.65% 00:10:11.314 nvme0n4: ios=5632/5695, merge=0/0, ticks=28347/24873, in_queue=53220, util=89.34% 00:10:11.314 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:11.314 [global] 00:10:11.314 thread=1 00:10:11.314 invalidate=1 00:10:11.314 rw=randwrite 00:10:11.314 time_based=1 00:10:11.314 runtime=1 00:10:11.314 ioengine=libaio 00:10:11.314 direct=1 
00:10:11.314 bs=4096 00:10:11.314 iodepth=128 00:10:11.314 norandommap=0 00:10:11.314 numjobs=1 00:10:11.314 00:10:11.314 verify_dump=1 00:10:11.314 verify_backlog=512 00:10:11.314 verify_state_save=0 00:10:11.314 do_verify=1 00:10:11.314 verify=crc32c-intel 00:10:11.314 [job0] 00:10:11.314 filename=/dev/nvme0n1 00:10:11.314 [job1] 00:10:11.314 filename=/dev/nvme0n2 00:10:11.314 [job2] 00:10:11.314 filename=/dev/nvme0n3 00:10:11.314 [job3] 00:10:11.314 filename=/dev/nvme0n4 00:10:11.314 Could not set queue depth (nvme0n1) 00:10:11.314 Could not set queue depth (nvme0n2) 00:10:11.314 Could not set queue depth (nvme0n3) 00:10:11.314 Could not set queue depth (nvme0n4) 00:10:11.574 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.574 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.574 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.574 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.574 fio-3.35 00:10:11.574 Starting 4 threads 00:10:12.957 00:10:12.957 job0: (groupid=0, jobs=1): err= 0: pid=253197: Fri Nov 15 10:50:32 2024 00:10:12.958 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:12.958 slat (nsec): min=941, max=18514k, avg=86235.56, stdev=660842.06 00:10:12.958 clat (usec): min=4117, max=35516, avg=10646.37, stdev=4581.13 00:10:12.958 lat (usec): min=4122, max=44751, avg=10732.60, stdev=4644.18 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 5211], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 7832], 00:10:12.958 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9765], 00:10:12.958 | 70.00th=[10945], 80.00th=[12387], 90.00th=[16188], 95.00th=[21627], 00:10:12.958 | 99.00th=[27132], 99.50th=[28705], 99.90th=[32900], 99.95th=[32900], 00:10:12.958 | 99.99th=[35390] 00:10:12.958 write: IOPS=5710, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1006msec); 0 zone resets 00:10:12.958 slat (nsec): min=1577, max=13643k, avg=81939.40, stdev=515354.44 00:10:12.958 clat (usec): min=1061, max=52641, avg=11745.22, stdev=7761.05 00:10:12.958 lat (usec): min=1074, max=52649, avg=11827.16, stdev=7813.60 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 3359], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 6521], 00:10:12.958 | 30.00th=[ 6980], 40.00th=[ 7832], 50.00th=[ 8979], 60.00th=[11600], 00:10:12.958 | 70.00th=[13435], 80.00th=[15008], 90.00th=[21890], 95.00th=[27657], 00:10:12.958 | 99.00th=[45351], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:10:12.958 | 99.99th=[52691] 00:10:12.958 bw ( KiB/s): min=21760, max=23344, per=23.99%, avg=22552.00, stdev=1120.06, samples=2 00:10:12.958 iops : min= 5440, max= 5836, avg=5638.00, stdev=280.01, samples=2 00:10:12.958 lat (msec) : 2=0.03%, 4=1.69%, 10=56.90%, 20=31.61%, 50=9.52% 00:10:12.958 lat (msec) : 100=0.25% 00:10:12.958 cpu : usr=3.28%, sys=7.76%, ctx=430, majf=0, minf=1 00:10:12.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:12.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.958 issued rwts: total=5632,5745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.958 job1: (groupid=0, jobs=1): err= 0: pid=253213: Fri 
Nov 15 10:50:32 2024 00:10:12.958 read: IOPS=6562, BW=25.6MiB/s (26.9MB/s)(25.8MiB/1007msec) 00:10:12.958 slat (nsec): min=875, max=11968k, avg=77855.70, stdev=531511.65 00:10:12.958 clat (usec): min=1094, max=33315, avg=10090.23, stdev=5177.19 00:10:12.958 lat (usec): min=2563, max=33343, avg=10168.09, stdev=5225.01 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 3818], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6718], 00:10:12.958 | 30.00th=[ 7111], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9634], 00:10:12.958 | 70.00th=[10421], 80.00th=[11338], 90.00th=[17171], 95.00th=[23200], 00:10:12.958 | 99.00th=[29230], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:10:12.958 | 99.99th=[33424] 00:10:12.958 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:10:12.958 slat (nsec): min=1484, max=7204.8k, avg=64209.96, stdev=433089.11 00:10:12.958 clat (usec): min=1171, max=61295, avg=9120.61, stdev=8508.49 00:10:12.958 lat (usec): min=1180, max=61301, avg=9184.82, stdev=8567.05 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 2409], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 5080], 00:10:12.958 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7242], 00:10:12.958 | 70.00th=[ 8586], 80.00th=[ 9765], 90.00th=[11994], 95.00th=[25822], 00:10:12.958 | 99.00th=[55837], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:10:12.958 | 99.99th=[61080] 00:10:12.958 bw ( KiB/s): min=16664, max=36584, per=28.32%, avg=26624.00, stdev=14085.57, samples=2 00:10:12.958 iops : min= 4166, max= 9146, avg=6656.00, stdev=3521.39, samples=2 00:10:12.958 lat (msec) : 2=0.30%, 4=3.35%, 10=68.60%, 20=21.08%, 50=5.78% 00:10:12.958 lat (msec) : 100=0.89% 00:10:12.958 cpu : usr=4.87%, sys=6.76%, ctx=512, majf=0, minf=1 00:10:12.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:12.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.958 issued rwts: total=6608,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.958 job2: (groupid=0, jobs=1): err= 0: pid=253232: Fri Nov 15 10:50:32 2024 00:10:12.958 read: IOPS=5379, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1006msec) 00:10:12.958 slat (nsec): min=951, max=20606k, avg=99029.15, stdev=771361.58 00:10:12.958 clat (usec): min=2248, max=53345, avg=12871.33, stdev=8312.18 00:10:12.958 lat (usec): min=2934, max=53371, avg=12970.36, stdev=8391.87 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7898], 00:10:12.958 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10683], 00:10:12.958 | 70.00th=[11600], 80.00th=[13960], 90.00th=[26346], 95.00th=[34341], 00:10:12.958 | 99.00th=[38011], 99.50th=[41681], 99.90th=[44827], 99.95th=[48497], 00:10:12.958 | 99.99th=[53216] 00:10:12.958 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:12.958 slat (nsec): min=1585, max=13125k, avg=74698.23, stdev=535341.01 00:10:12.958 clat (usec): min=792, max=46762, avg=10279.86, stdev=6545.95 00:10:12.958 lat (usec): min=1158, max=46769, avg=10354.56, stdev=6587.45 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 2999], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5932], 00:10:12.958 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8586], 00:10:12.958 | 70.00th=[10159], 80.00th=[14615], 
90.00th=[17957], 95.00th=[22152], 00:10:12.958 | 99.00th=[40109], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:10:12.958 | 99.99th=[46924] 00:10:12.958 bw ( KiB/s): min=16384, max=28672, per=23.97%, avg=22528.00, stdev=8688.93, samples=2 00:10:12.958 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:10:12.958 lat (usec) : 1000=0.01% 00:10:12.958 lat (msec) : 2=0.34%, 4=1.37%, 10=59.99%, 20=26.12%, 50=12.17% 00:10:12.958 lat (msec) : 100=0.01% 00:10:12.958 cpu : usr=4.88%, sys=6.17%, ctx=395, majf=0, minf=2 00:10:12.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:12.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.958 issued rwts: total=5412,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.958 job3: (groupid=0, jobs=1): err= 0: pid=253239: Fri Nov 15 10:50:32 2024 00:10:12.958 read: IOPS=5591, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:12.958 slat (nsec): min=930, max=11078k, avg=98809.29, stdev=629641.23 00:10:12.958 clat (usec): min=1337, max=45028, avg=12484.37, stdev=6346.73 00:10:12.958 lat (usec): min=6670, max=45055, avg=12583.17, stdev=6413.38 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:10:12.958 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:10:12.958 | 70.00th=[11076], 80.00th=[14877], 90.00th=[18482], 95.00th=[32637], 00:10:12.958 | 99.00th=[35914], 99.50th=[36963], 99.90th=[41681], 99.95th=[43254], 00:10:12.958 | 99.99th=[44827] 00:10:12.958 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:12.958 slat (nsec): min=1508, max=7260.6k, avg=74559.29, stdev=433515.41 00:10:12.958 clat (usec): min=4779, max=34089, avg=10084.06, stdev=2517.09 00:10:12.958 lat (usec): min=4786, max=34096, avg=10158.62, stdev=2548.54 00:10:12.958 clat percentiles (usec): 00:10:12.958 | 1.00th=[ 6849], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8848], 00:10:12.958 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:10:12.958 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[12256], 95.00th=[14746], 00:10:12.958 | 99.00th=[22152], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:10:12.958 | 99.99th=[34341] 00:10:12.958 bw ( KiB/s): min=20488, max=24568, per=23.97%, avg=22528.00, stdev=2885.00, samples=2 00:10:12.958 iops : min= 5122, max= 6142, avg=5632.00, stdev=721.25, samples=2 00:10:12.958 lat (msec) : 2=0.01%, 10=64.94%, 20=30.14%, 50=4.91% 00:10:12.958 cpu : usr=3.38%, sys=5.87%, ctx=436, majf=0, minf=2 00:10:12.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:12.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.958 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.958 00:10:12.958 Run status group 0 (all jobs): 00:10:12.958 READ: bw=90.3MiB/s (94.7MB/s), 21.0MiB/s-25.6MiB/s (22.0MB/s-26.9MB/s), io=90.9MiB (95.3MB), run=1006-1007msec 00:10:12.958 WRITE: bw=91.8MiB/s (96.3MB/s), 21.9MiB/s-25.8MiB/s (22.9MB/s-27.1MB/s), io=92.4MiB (96.9MB), run=1006-1007msec 00:10:12.958 00:10:12.958 Disk stats (read/write): 00:10:12.958 nvme0n1: ios=4271/4608, 
merge=0/0, ticks=45837/55384, in_queue=101221, util=86.57% 00:10:12.958 nvme0n2: ios=5988/6144, merge=0/0, ticks=37753/37753, in_queue=75506, util=88.06% 00:10:12.958 nvme0n3: ios=4096/4399, merge=0/0, ticks=39920/38085, in_queue=78005, util=88.38% 00:10:12.958 nvme0n4: ios=5125/5120, merge=0/0, ticks=18879/16117, in_queue=34996, util=92.41% 00:10:12.958 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:12.958 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=253481 00:10:12.958 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:12.958 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:12.958 [global] 00:10:12.958 thread=1 00:10:12.958 invalidate=1 00:10:12.958 rw=read 00:10:12.958 time_based=1 00:10:12.958 runtime=10 00:10:12.958 ioengine=libaio 00:10:12.958 direct=1 00:10:12.958 bs=4096 00:10:12.958 iodepth=1 00:10:12.958 norandommap=1 00:10:12.958 numjobs=1 00:10:12.958 00:10:12.958 [job0] 00:10:12.958 filename=/dev/nvme0n1 00:10:12.958 [job1] 00:10:12.958 filename=/dev/nvme0n2 00:10:12.958 [job2] 00:10:12.958 filename=/dev/nvme0n3 00:10:12.958 [job3] 00:10:12.958 filename=/dev/nvme0n4 00:10:12.958 Could not set queue depth (nvme0n1) 00:10:12.959 Could not set queue depth (nvme0n2) 00:10:12.959 Could not set queue depth (nvme0n3) 00:10:12.959 Could not set queue depth (nvme0n4) 00:10:13.219 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.219 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.219 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.219 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.219 fio-3.35 00:10:13.219 Starting 4 threads 00:10:15.764 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:16.025 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10244096, buflen=4096 00:10:16.025 fio: pid=253755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.025 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:16.285 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.285 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:16.285 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4714496, buflen=4096 00:10:16.285 fio: pid=253742, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.285 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=286720, buflen=4096 00:10:16.285 fio: pid=253695, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.285 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:16.285 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:16.546 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5337088, buflen=4096 00:10:16.546 fio: pid=253707, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.546 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.546 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:16.546 00:10:16.546 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=253695: Fri Nov 15 10:50:36 2024 00:10:16.546 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(280KiB/2904msec) 00:10:16.546 slat (usec): min=16, max=8673, avg=151.61, stdev=1026.07 00:10:16.546 clat (usec): min=861, max=42109, avg=41004.27, stdev=4889.93 00:10:16.546 lat (usec): min=904, max=49963, avg=41157.76, stdev=5003.69 00:10:16.546 clat percentiles (usec): 00:10:16.546 | 1.00th=[ 865], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:16.546 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:16.546 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:16.546 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:16.546 | 99.99th=[42206] 00:10:16.546 bw ( KiB/s): min= 96, max= 104, per=1.49%, avg=97.60, stdev= 3.58, samples=5 00:10:16.546 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:16.546 lat (usec) : 1000=1.41% 00:10:16.546 lat (msec) : 50=97.18% 00:10:16.546 cpu : usr=0.00%, sys=0.14%, ctx=73, majf=0, minf=1 00:10:16.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.546 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.546 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.546 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=253707: Fri Nov 15 10:50:36 2024 00:10:16.546 read: IOPS=423, BW=1692KiB/s (1733kB/s)(5212KiB/3080msec) 00:10:16.546 slat (usec): min=7, max=22531, avg=76.41, stdev=948.54 00:10:16.546 clat (usec): min=322, max=42088, avg=2263.82, stdev=7077.04 00:10:16.546 lat (usec): min=348, max=42112, avg=2340.26, stdev=7130.88 00:10:16.546 clat percentiles (usec): 00:10:16.546 | 1.00th=[ 611], 5.00th=[ 783], 10.00th=[ 857], 20.00th=[ 930], 00:10:16.546 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:10:16.546 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1106], 95.00th=[ 1188], 00:10:16.546 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:16.546 | 99.99th=[42206] 00:10:16.546 bw ( KiB/s): min= 104, max= 3904, per=25.59%, avg=1670.33, stdev=1664.23, samples=6 00:10:16.546 iops : min= 26, max= 976, avg=417.50, stdev=415.96, samples=6 00:10:16.546 lat (usec) : 500=0.31%, 750=3.30%, 1000=51.38% 00:10:16.546 lat (msec) : 2=41.64%, 4=0.08%, 20=0.08%, 50=3.14% 00:10:16.546 cpu : usr=0.39%, sys=1.30%, ctx=1309, majf=0, minf=2 00:10:16.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.546 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.546 issued rwts: total=1304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.546 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=253742: Fri Nov 15 10:50:36 2024 00:10:16.546 read: IOPS=422, BW=1687KiB/s (1728kB/s)(4604KiB/2729msec) 00:10:16.546 slat (usec): min=9, max=18522, avg=54.56, stdev=699.15 00:10:16.546 clat (usec): min=613, max=42082, avg=2290.56, stdev=7111.95 00:10:16.546 lat (usec): min=639, max=42108, avg=2345.15, stdev=7141.47 00:10:16.546 clat percentiles (usec): 00:10:16.546 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 881], 20.00th=[ 930], 00:10:16.546 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:10:16.546 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1172], 00:10:16.546 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:16.546 | 99.99th=[42206] 00:10:16.546 bw ( KiB/s): min= 96, max= 3912, per=25.10%, avg=1638.40, stdev=1998.63, samples=5 00:10:16.546 iops : min= 24, max= 978, avg=409.60, stdev=499.66, samples=5 00:10:16.546 lat (usec) : 750=1.30%, 1000=44.79% 00:10:16.546 lat (msec) : 2=50.61%, 50=3.21% 00:10:16.546 cpu : usr=0.11%, sys=1.61%, ctx=1154, majf=0, minf=2 00:10:16.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.547 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.547 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.547 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=253755: Fri Nov 15 10:50:36 2024 00:10:16.547 read: IOPS=987, BW=3949KiB/s (4044kB/s)(9.77MiB/2533msec) 00:10:16.547 slat (nsec): min=7932, max=88753, avg=25202.62, stdev=2391.09 00:10:16.547 clat (usec): min=567, max=1440, avg=970.40, stdev=80.56 00:10:16.547 lat (usec): min=593, max=1469, avg=995.61, stdev=80.47 00:10:16.547 clat percentiles (usec): 00:10:16.547 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:10:16.547 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:10:16.547 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:16.547 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1369], 99.95th=[ 1385], 00:10:16.547 | 99.99th=[ 1434] 00:10:16.547 bw ( KiB/s): min= 3928, max= 4048, per=61.26%, avg=3998.40, stdev=43.60, samples=5 00:10:16.547 iops : min= 982, max= 1012, avg=999.60, stdev=10.90, samples=5 00:10:16.547 lat (usec) : 750=1.24%, 1000=65.27% 00:10:16.547 lat (msec) : 2=33.45% 00:10:16.547 cpu : usr=0.83%, sys=3.16%, ctx=2503, majf=0, minf=2 00:10:16.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.547 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.547 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.547 00:10:16.547 Run status group 0 (all jobs): 00:10:16.547 READ: bw=6526KiB/s 
(6683kB/s), 96.4KiB/s-3949KiB/s (98.7kB/s-4044kB/s), io=19.6MiB (20.6MB), run=2533-3080msec 00:10:16.547 00:10:16.547 Disk stats (read/write): 00:10:16.547 nvme0n1: ios=67/0, merge=0/0, ticks=2748/0, in_queue=2748, util=92.49% 00:10:16.547 nvme0n2: ios=1273/0, merge=0/0, ticks=2919/0, in_queue=2919, util=92.31% 00:10:16.547 nvme0n3: ios=1049/0, merge=0/0, ticks=2491/0, in_queue=2491, util=95.50% 00:10:16.547 nvme0n4: ios=2502/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.04% 00:10:16.807 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.807 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:17.067 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.067 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:17.067 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.067 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:17.327 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.327 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 253481 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:17.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:17.588 nvmf hotplug test: fio failed as expected 
00:10:17.588 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.848 rmmod nvme_tcp 00:10:17.848 rmmod nvme_fabrics 00:10:17.848 rmmod nvme_keyring 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 249950 ']' 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 249950 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 249950 ']' 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 249950 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249950 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 249950' 00:10:17.848 killing process with pid 249950 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 249950 00:10:17.848 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 249950 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.109 
10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.109 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.019 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.280 00:10:20.280 real 0m29.494s 00:10:20.280 user 2m41.205s 00:10:20.280 sys 0m9.444s 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.280 ************************************ 00:10:20.280 END TEST nvmf_fio_target 00:10:20.280 ************************************ 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.280 ************************************ 00:10:20.280 START TEST nvmf_bdevio 00:10:20.280 ************************************ 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:20.280 * Looking for test storage... 
00:10:20.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.280 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.541 --rc genhtml_branch_coverage=1 00:10:20.541 --rc genhtml_function_coverage=1 00:10:20.541 --rc genhtml_legend=1 00:10:20.541 --rc geninfo_all_blocks=1 00:10:20.541 --rc geninfo_unexecuted_blocks=1 00:10:20.541 00:10:20.541 ' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.541 --rc genhtml_branch_coverage=1 00:10:20.541 --rc genhtml_function_coverage=1 00:10:20.541 --rc genhtml_legend=1 00:10:20.541 --rc geninfo_all_blocks=1 00:10:20.541 --rc geninfo_unexecuted_blocks=1 00:10:20.541 00:10:20.541 ' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.541 --rc genhtml_branch_coverage=1 00:10:20.541 --rc genhtml_function_coverage=1 00:10:20.541 --rc genhtml_legend=1 00:10:20.541 --rc geninfo_all_blocks=1 00:10:20.541 --rc geninfo_unexecuted_blocks=1 00:10:20.541 00:10:20.541 ' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.541 --rc genhtml_branch_coverage=1 00:10:20.541 --rc genhtml_function_coverage=1 00:10:20.541 --rc genhtml_legend=1 00:10:20.541 --rc geninfo_all_blocks=1 00:10:20.541 --rc geninfo_unexecuted_blocks=1 00:10:20.541 00:10:20.541 ' 00:10:20.541 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.542 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.677 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:28.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:28.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.678 10:50:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:28.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:28.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.678 
10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:10:28.678 00:10:28.678 --- 10.0.0.2 ping statistics --- 00:10:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.678 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:10:28.678 00:10:28.678 --- 10.0.0.1 ping statistics --- 00:10:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.678 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.678 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=259036 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 259036 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 259036 ']' 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.679 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.679 [2024-11-15 10:50:47.449189] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
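
The nvmf_tcp_init sequence traced above moves one port of the e810 pair into a private network namespace so target and initiator traffic crosses the physical link. A minimal by-hand sketch of the same bring-up, assuming the ports enumerate as cvl_0_0/cvl_0_1 (names taken from this log, not fixed):

ip netns add cvl_0_0_ns_spdk                  # namespace that will host the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions
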
00:10:28.679 [2024-11-15 10:50:47.449254] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.679 [2024-11-15 10:50:47.548116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.679 [2024-11-15 10:50:47.599916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.679 [2024-11-15 10:50:47.599965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.679 [2024-11-15 10:50:47.599974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.679 [2024-11-15 10:50:47.599981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.679 [2024-11-15 10:50:47.599988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.679 [2024-11-15 10:50:47.602064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:28.679 [2024-11-15 10:50:47.602227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:28.679 [2024-11-15 10:50:47.602386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:28.679 [2024-11-15 10:50:47.602387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 [2024-11-15 10:50:48.330986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 Malloc0 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.939 10:50:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.939 [2024-11-15 10:50:48.404390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:28.939 { 00:10:28.939 "params": { 00:10:28.939 "name": "Nvme$subsystem", 00:10:28.939 "trtype": "$TEST_TRANSPORT", 00:10:28.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.939 "adrfam": "ipv4", 00:10:28.939 "trsvcid": "$NVMF_PORT", 00:10:28.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.939 "hdgst": ${hdgst:-false}, 00:10:28.939 "ddgst": ${ddgst:-false} 00:10:28.939 }, 00:10:28.939 "method": "bdev_nvme_attach_controller" 00:10:28.939 } 00:10:28.939 EOF 00:10:28.939 )") 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:28.939 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.939 "params": { 00:10:28.939 "name": "Nvme1", 00:10:28.939 "trtype": "tcp", 00:10:28.939 "traddr": "10.0.0.2", 00:10:28.939 "adrfam": "ipv4", 00:10:28.939 "trsvcid": "4420", 00:10:28.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.940 "hdgst": false, 00:10:28.940 "ddgst": false 00:10:28.940 }, 00:10:28.940 "method": "bdev_nvme_attach_controller" 00:10:28.940 }' 00:10:28.940 [2024-11-15 10:50:48.462321] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
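
Stripped of the xtrace noise, the target configuration issued above is five RPCs. A sketch of the equivalent standalone sequence via scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; the default /var/tmp/spdk.sock RPC socket is assumed here):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
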
00:10:28.940 [2024-11-15 10:50:48.462388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259092 ] 00:10:29.200 [2024-11-15 10:50:48.555621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.200 [2024-11-15 10:50:48.612702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.200 [2024-11-15 10:50:48.612835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.200 [2024-11-15 10:50:48.612835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.461 I/O targets: 00:10:29.461 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:29.461 00:10:29.461 00:10:29.461 CUnit - A unit testing framework for C - Version 2.1-3 00:10:29.461 http://cunit.sourceforge.net/ 00:10:29.461 00:10:29.461 00:10:29.461 Suite: bdevio tests on: Nvme1n1 00:10:29.461 Test: blockdev write read block ...passed 00:10:29.461 Test: blockdev write zeroes read block ...passed 00:10:29.461 Test: blockdev write zeroes read no split ...passed 00:10:29.721 Test: blockdev write zeroes read split ...passed 00:10:29.721 Test: blockdev write zeroes read split partial ...passed 00:10:29.721 Test: blockdev reset ...[2024-11-15 10:50:49.081499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:29.721 [2024-11-15 10:50:49.081603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575970 (9): Bad file descriptor 00:10:29.721 [2024-11-15 10:50:49.102711] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:29.721 passed 00:10:29.721 Test: blockdev write read 8 blocks ...passed 00:10:29.721 Test: blockdev write read size > 128k ...passed 00:10:29.721 Test: blockdev write read invalid size ...passed 00:10:29.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:29.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:29.721 Test: blockdev write read max offset ...passed 00:10:29.721 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:29.981 Test: blockdev writev readv 8 blocks ...passed 00:10:29.981 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.981 Test: blockdev writev readv block ...passed 00:10:29.981 Test: blockdev writev readv size > 128k ...passed 00:10:29.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.981 Test: blockdev comparev and writev ...[2024-11-15 10:50:49.407251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.407301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.407318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.407327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.407777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.407792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.407806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.407816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.408216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.408231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.408245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.408253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.408662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.408677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.408693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.981 [2024-11-15 10:50:49.408705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:29.981 passed 00:10:29.981 Test: blockdev nvme passthru rw ...passed 00:10:29.981 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:49.493053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.981 [2024-11-15 10:50:49.493071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.493294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.981 [2024-11-15 10:50:49.493307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.493528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.981 [2024-11-15 10:50:49.493539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:29.981 [2024-11-15 10:50:49.493788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.981 [2024-11-15 10:50:49.493801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:29.981 passed 00:10:30.242 Test: blockdev nvme admin passthru ...passed 00:10:30.242 Test: blockdev copy ...passed 00:10:30.242 00:10:30.242 Run Summary: Type Total Ran Passed Failed Inactive 00:10:30.242 suites 1 1 n/a 0 0 00:10:30.242 tests 23 23 23 0 0 00:10:30.242 asserts 152 152 152 0 n/a 00:10:30.242 00:10:30.242 Elapsed time = 1.385 seconds 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.242 rmmod nvme_tcp 00:10:30.242 rmmod nvme_fabrics 00:10:30.242 rmmod nvme_keyring 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
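
The bdevio binary in this run read its bdev configuration from an anonymous descriptor (--json /dev/fd/62). A file-based equivalent, assuming the standard SPDK application JSON-config wrapper around the bdev_nvme_attach_controller parameters that gen_nvmf_target_json printed earlier in this log:

cat > bdevio.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json bdevio.json
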
00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 259036 ']' 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 259036 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 259036 ']' 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 259036 00:10:30.242 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 259036 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 259036' 00:10:30.503 killing process with pid 259036 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 259036 00:10:30.503 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 259036 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.503 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.045 00:10:33.045 real 0m12.464s 00:10:33.045 user 0m14.226s 00:10:33.045 sys 0m6.373s 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 ************************************ 00:10:33.045 END TEST nvmf_bdevio 00:10:33.045 ************************************ 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:33.045 00:10:33.045 real 5m3.842s 00:10:33.045 user 11m53.193s 00:10:33.045 sys 1m51.659s 00:10:33.045 
10:50:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 ************************************ 00:10:33.045 END TEST nvmf_target_core 00:10:33.045 ************************************ 00:10:33.045 10:50:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:33.045 10:50:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:33.045 10:50:52 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.045 10:50:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 ************************************ 00:10:33.045 START TEST nvmf_target_extra 00:10:33.045 ************************************ 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:33.045 * Looking for test storage... 00:10:33.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.045 --rc genhtml_branch_coverage=1 00:10:33.045 --rc genhtml_function_coverage=1 00:10:33.045 --rc genhtml_legend=1 00:10:33.045 --rc geninfo_all_blocks=1 00:10:33.045 --rc geninfo_unexecuted_blocks=1 00:10:33.045 00:10:33.045 ' 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.045 --rc genhtml_branch_coverage=1 00:10:33.045 --rc genhtml_function_coverage=1 00:10:33.045 --rc genhtml_legend=1 00:10:33.045 --rc geninfo_all_blocks=1 00:10:33.045 --rc geninfo_unexecuted_blocks=1 00:10:33.045 00:10:33.045 ' 00:10:33.045 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.045 --rc genhtml_branch_coverage=1 00:10:33.045 --rc genhtml_function_coverage=1 00:10:33.045 --rc genhtml_legend=1 00:10:33.045 --rc geninfo_all_blocks=1 00:10:33.045 --rc geninfo_unexecuted_blocks=1 00:10:33.045 00:10:33.045 ' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.046 --rc genhtml_branch_coverage=1 00:10:33.046 --rc genhtml_function_coverage=1 00:10:33.046 --rc genhtml_legend=1 00:10:33.046 --rc geninfo_all_blocks=1 00:10:33.046 --rc geninfo_unexecuted_blocks=1 00:10:33.046 00:10:33.046 ' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.046 ************************************ 00:10:33.046 START TEST nvmf_example 00:10:33.046 ************************************ 00:10:33.046 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:33.308 * Looking for test storage... 
00:10:33.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.308 --rc genhtml_branch_coverage=1 00:10:33.308 --rc genhtml_function_coverage=1 00:10:33.308 --rc genhtml_legend=1 00:10:33.308 --rc geninfo_all_blocks=1 00:10:33.308 --rc geninfo_unexecuted_blocks=1 00:10:33.308 00:10:33.308 ' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.308 --rc genhtml_branch_coverage=1 00:10:33.308 --rc genhtml_function_coverage=1 00:10:33.308 --rc genhtml_legend=1 00:10:33.308 --rc geninfo_all_blocks=1 00:10:33.308 --rc geninfo_unexecuted_blocks=1 00:10:33.308 00:10:33.308 ' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.308 --rc genhtml_branch_coverage=1 00:10:33.308 --rc genhtml_function_coverage=1 00:10:33.308 --rc genhtml_legend=1 00:10:33.308 --rc geninfo_all_blocks=1 00:10:33.308 --rc geninfo_unexecuted_blocks=1 00:10:33.308 00:10:33.308 ' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.308 --rc genhtml_branch_coverage=1 00:10:33.308 --rc genhtml_function_coverage=1 00:10:33.308 --rc genhtml_legend=1 00:10:33.308 --rc geninfo_all_blocks=1 00:10:33.308 --rc geninfo_unexecuted_blocks=1 00:10:33.308 00:10:33.308 ' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:33.308 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.308 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:33.309 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.309 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:41.449 10:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:41.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:41.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:41.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:41.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.449 10:50:59 
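The discovery loop above maps each matched PCI function to its kernel net devices by globbing sysfs, then strips the directory prefix with ${pci_net_devs[@]##*/}. A standalone sketch of the same lookup, using the PCI address found in this log:

  pci=0000:4b:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] || continue            # the glob may match nothing
      echo "net device: ${path##*/}"        # basename, e.g. cvl_0_0
  done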
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.449 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.450 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:10:41.450 00:10:41.450 --- 10.0.0.2 ping statistics --- 00:10:41.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.450 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:10:41.450 00:10:41.450 --- 10.0.0.1 ping statistics --- 00:10:41.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.450 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=263822 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 263822 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 263822 ']' 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example 
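Condensing the network setup traced above: one E810 port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP port, tagged with an SPDK_NVMF comment so teardown can find it, and the two pings verify reachability in both directions. The sequence, stripped of harness wrappers:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF:accept-4420
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator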
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.450 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.712 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.973 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.974 10:51:01 
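rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py, talking to the default /var/tmp/spdk.sock shown in waitforlisten. The same five-step target bring-up issued directly, with every flag copied from the trace (-a allows any host, -s sets the subsystem serial number; bdev_malloc_create 64 512 makes a 64 MiB RAM bdev with 512-byte blocks, returned here as Malloc0):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420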
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.974 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.974 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:41.974 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:54.243 Initializing NVMe Controllers 00:10:54.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:54.243 Initialization complete. Launching workers. 00:10:54.243 ======================================================== 00:10:54.243 Latency(us) 00:10:54.243 Device Information : IOPS MiB/s Average min max 00:10:54.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18832.50 73.56 3398.07 631.65 15517.72 00:10:54.243 ======================================================== 00:10:54.243 Total : 18832.50 73.56 3398.07 631.65 15517.72 00:10:54.243 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.243 rmmod nvme_tcp 00:10:54.243 rmmod nvme_fabrics 00:10:54.243 rmmod nvme_keyring 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 263822 ']' 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 263822 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 263822 ']' 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 263822 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 263822 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # 
process_name=nvmf 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 263822' 00:10:54.243 killing process with pid 263822 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 263822 00:10:54.243 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 263822 00:10:54.243 nvmf threads initialize successfully 00:10:54.243 bdev subsystem init successfully 00:10:54.243 created a nvmf target service 00:10:54.243 create targets's poll groups done 00:10:54.243 all subsystems of target started 00:10:54.243 nvmf target is running 00:10:54.243 all subsystems of target stopped 00:10:54.243 destroy targets's poll groups done 00:10:54.243 destroyed the nvmf target service 00:10:54.244 bdev subsystem finish successfully 00:10:54.244 nvmf threads destroy successfully 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.244 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.503 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.503 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:54.503 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.503 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.503 00:10:54.503 real 0m21.495s 00:10:54.503 user 0m46.960s 00:10:54.503 sys 0m6.935s 00:10:54.503 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:54.503 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.503 ************************************ 00:10:54.503 END TEST nvmf_example 00:10:54.503 ************************************ 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:54.765 10:51:14 
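The perf table above is internally consistent: 18832.50 IOPS x 4096-byte I/O = 73.56 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 64 at 18832.5 IOPS implies 64 / 18832.5 s = 3.40 ms average latency, matching the reported 3398.07 us. Teardown then reverses the setup; a sketch of the iptr and namespace cleanup steps (the explicit netns delete is my reading of _remove_spdk_ns, whose body the log elides):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # as traced at the end of the test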
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.765 ************************************ 00:10:54.765 START TEST nvmf_filesystem 00:10:54.765 ************************************ 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:54.765 * Looking for test storage... 00:10:54.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:54.765 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.766 --rc genhtml_branch_coverage=1 00:10:54.766 --rc genhtml_function_coverage=1 00:10:54.766 --rc genhtml_legend=1 00:10:54.766 --rc geninfo_all_blocks=1 00:10:54.766 --rc geninfo_unexecuted_blocks=1 00:10:54.766 00:10:54.766 ' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.766 --rc genhtml_branch_coverage=1 00:10:54.766 --rc genhtml_function_coverage=1 00:10:54.766 --rc genhtml_legend=1 00:10:54.766 --rc geninfo_all_blocks=1 00:10:54.766 --rc geninfo_unexecuted_blocks=1 00:10:54.766 00:10:54.766 ' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.766 --rc genhtml_branch_coverage=1 00:10:54.766 --rc genhtml_function_coverage=1 00:10:54.766 --rc genhtml_legend=1 00:10:54.766 --rc geninfo_all_blocks=1 00:10:54.766 --rc geninfo_unexecuted_blocks=1 00:10:54.766 00:10:54.766 ' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.766 --rc genhtml_branch_coverage=1 00:10:54.766 --rc genhtml_function_coverage=1 00:10:54.766 --rc genhtml_legend=1 00:10:54.766 --rc geninfo_all_blocks=1 00:10:54.766 --rc geninfo_unexecuted_blocks=1 00:10:54.766 00:10:54.766 ' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:54.766 10:51:14 
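The lt/cmp_versions trace above splits each version string on '.', '-', and ':' via IFS, then compares component-wise with missing components treated as 0. A reduced, runnable sketch of the same idea (not the verbatim scripts/common.sh code):

  lt() {                                   # returns 0 (true) if $1 < $2
      local -a a b
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                             # equal is not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"             # prints: 1.15 < 2, as in the trace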
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:54.766 
10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:54.766 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:55.031 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:55.031 #define SPDK_CONFIG_H 00:10:55.031 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:55.031 #define SPDK_CONFIG_APPS 1 00:10:55.031 #define SPDK_CONFIG_ARCH native 00:10:55.031 #undef SPDK_CONFIG_ASAN 00:10:55.031 #undef SPDK_CONFIG_AVAHI 00:10:55.031 #undef SPDK_CONFIG_CET 00:10:55.031 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:55.031 #define SPDK_CONFIG_COVERAGE 1 00:10:55.031 #define SPDK_CONFIG_CROSS_PREFIX 00:10:55.031 #undef SPDK_CONFIG_CRYPTO 00:10:55.031 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:55.031 #undef SPDK_CONFIG_CUSTOMOCF 00:10:55.031 #undef SPDK_CONFIG_DAOS 00:10:55.031 #define SPDK_CONFIG_DAOS_DIR 00:10:55.031 #define SPDK_CONFIG_DEBUG 1 00:10:55.031 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:55.031 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:55.031 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:55.031 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:55.031 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:55.031 #undef SPDK_CONFIG_DPDK_UADK 00:10:55.031 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:55.031 #define SPDK_CONFIG_EXAMPLES 1 00:10:55.031 #undef SPDK_CONFIG_FC 00:10:55.032 #define SPDK_CONFIG_FC_PATH 00:10:55.032 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:55.032 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:55.032 #define SPDK_CONFIG_FSDEV 1 00:10:55.032 #undef SPDK_CONFIG_FUSE 00:10:55.032 #undef SPDK_CONFIG_FUZZER 00:10:55.032 #define SPDK_CONFIG_FUZZER_LIB 00:10:55.032 #undef SPDK_CONFIG_GOLANG 00:10:55.032 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:55.032 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:55.032 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:55.032 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:55.032 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:55.032 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:55.032 #undef SPDK_CONFIG_HAVE_LZ4 00:10:55.032 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:55.032 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:55.032 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:55.032 #define SPDK_CONFIG_IDXD 1 00:10:55.032 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:55.032 #undef SPDK_CONFIG_IPSEC_MB 00:10:55.032 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:55.032 #define SPDK_CONFIG_ISAL 1 00:10:55.032 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:55.032 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:55.032 #define SPDK_CONFIG_LIBDIR 00:10:55.032 #undef SPDK_CONFIG_LTO 00:10:55.032 #define SPDK_CONFIG_MAX_LCORES 128 00:10:55.032 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:55.032 #define SPDK_CONFIG_NVME_CUSE 1 00:10:55.032 #undef SPDK_CONFIG_OCF 00:10:55.032 #define SPDK_CONFIG_OCF_PATH 00:10:55.032 #define SPDK_CONFIG_OPENSSL_PATH 00:10:55.032 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:55.032 #define SPDK_CONFIG_PGO_DIR 00:10:55.032 #undef SPDK_CONFIG_PGO_USE 00:10:55.032 #define SPDK_CONFIG_PREFIX /usr/local 00:10:55.032 #undef SPDK_CONFIG_RAID5F 00:10:55.032 #undef SPDK_CONFIG_RBD 00:10:55.032 #define SPDK_CONFIG_RDMA 1 00:10:55.032 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:55.032 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:55.032 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:55.032 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:55.032 #define SPDK_CONFIG_SHARED 1 00:10:55.032 #undef SPDK_CONFIG_SMA 00:10:55.032 #define SPDK_CONFIG_TESTS 1 00:10:55.032 #undef SPDK_CONFIG_TSAN 
00:10:55.032 #define SPDK_CONFIG_UBLK 1 00:10:55.032 #define SPDK_CONFIG_UBSAN 1 00:10:55.032 #undef SPDK_CONFIG_UNIT_TESTS 00:10:55.032 #undef SPDK_CONFIG_URING 00:10:55.032 #define SPDK_CONFIG_URING_PATH 00:10:55.032 #undef SPDK_CONFIG_URING_ZNS 00:10:55.032 #undef SPDK_CONFIG_USDT 00:10:55.032 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:55.032 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:55.032 #define SPDK_CONFIG_VFIO_USER 1 00:10:55.032 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:55.032 #define SPDK_CONFIG_VHOST 1 00:10:55.032 #define SPDK_CONFIG_VIRTIO 1 00:10:55.032 #undef SPDK_CONFIG_VTUNE 00:10:55.032 #define SPDK_CONFIG_VTUNE_DIR 00:10:55.032 #define SPDK_CONFIG_WERROR 1 00:10:55.032 #define SPDK_CONFIG_WPDK_DIR 00:10:55.032 #undef SPDK_CONFIG_XNVME 00:10:55.032 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:55.032 10:51:14 
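Back at the common/applications.sh@23 check a few entries above: rather than forking grep, the harness slurps include/spdk/config.h with $(< file) and glob-matches the text; the backslash-heavy pattern in the trace is just xtrace quoting each character of "#define SPDK_CONFIG_DEBUG". A reduced sketch, using this workspace's path:

  config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $config && $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  fi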
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:55.032 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:55.033 10:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
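
The long run of bare ': 0' / 'export SPDK_TEST_*' pairs traced through this stretch is the standard default-then-export idiom: honor a value the caller already placed in the environment, otherwise fall back to a default, then export either way. A sketch with a hypothetical flag name (xtrace prints only the expanded ': 0', exactly as seen above):

    : "${SPDK_TEST_EXAMPLE:=0}"   # keep the caller's value, else default to 0
    export SPDK_TEST_EXAMPLE      # SPDK_TEST_EXAMPLE is illustrative only
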
common/autotest_common.sh@169 -- # : 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.033 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
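
PATH, LD_LIBRARY_PATH and PYTHONPATH above visibly carry the same entries many times over because paths/export.sh keeps prepending on every re-source. That is harmless, but if one wanted to collapse such a variable, a small order-preserving dedup would do; dedup_path is a name invented here, not part of the SPDK scripts:

    # Hypothetical helper: drop repeated entries from a colon-separated
    # value while preserving first-seen order.
    dedup_path() {
        printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
    }
    PATH=$(dedup_path "$PATH")
    export PATH
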
00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
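
The suppression-file setup traced above tells LeakSanitizer to ignore any leak whose stack passes through libfuse3. Reconstructed as a standalone snippet (the path and pattern are taken from the log; the framing around them is a sketch):

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"    # LSAN 'leak:<pattern>' syntax
    export LSAN_OPTIONS="suppressions=$supp"
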
00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 267171 ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 267171 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:55.034 
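
Before set_test_storage runs, the trace checks that test process 267171 is still alive with 'kill -0', which sends no signal at all: it only asks the kernel whether the PID exists and may be signaled. A sketch of that gate:

    pid=267171   # PID as it appears in the trace
    if [[ -n $pid ]] && kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive; safe to provision test storage"
    fi
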
10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.uXA1YH 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uXA1YH/tests/target /tmp/spdk.uXA1YH 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:55.034 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:55.035 10:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122525573120 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356550144 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6830977024 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668241920 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.035 10:51:14 
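
The read loop traced through here parses 'df -T' (header stripped by grep -v Filesystem) into one associative array per column, keyed by mount point, so the storage check can later index avails[/] directly. A sketch mirroring the array names from the trace:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs       # filesystem type
        sizes[$mount]=$size
        uses[$mount]=$use
        avails[$mount]=$avail
    done < <(df -T | grep -v Filesystem)
    echo "/ has ${avails[/]} available on ${fss[/]}"
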
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677494784 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=782336 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:55.035 * Looking for test storage... 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122525573120 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9045569536 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:55.035 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.036 --rc genhtml_branch_coverage=1 00:10:55.036 --rc genhtml_function_coverage=1 00:10:55.036 --rc genhtml_legend=1 00:10:55.036 --rc geninfo_all_blocks=1 00:10:55.036 --rc geninfo_unexecuted_blocks=1 00:10:55.036 00:10:55.036 ' 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.036 --rc genhtml_branch_coverage=1 00:10:55.036 --rc genhtml_function_coverage=1 00:10:55.036 --rc genhtml_legend=1 00:10:55.036 --rc geninfo_all_blocks=1 00:10:55.036 --rc geninfo_unexecuted_blocks=1 00:10:55.036 00:10:55.036 ' 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.036 --rc genhtml_branch_coverage=1 00:10:55.036 --rc genhtml_function_coverage=1 00:10:55.036 --rc genhtml_legend=1 00:10:55.036 --rc geninfo_all_blocks=1 00:10:55.036 --rc geninfo_unexecuted_blocks=1 00:10:55.036 00:10:55.036 ' 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.036 --rc genhtml_branch_coverage=1 00:10:55.036 --rc genhtml_function_coverage=1 00:10:55.036 --rc genhtml_legend=1 00:10:55.036 --rc geninfo_all_blocks=1 00:10:55.036 --rc geninfo_unexecuted_blocks=1 00:10:55.036 00:10:55.036 ' 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
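
The scripts/common.sh trace above ('lt 1.15 2' ... 'return 0') is a field-wise version comparison: both strings are split on '.', '-' or ':' and compared numerically, missing fields counting as 0, which is why lcov 1.15 sorts before 2. A compact re-derivation under those semantics (version_lt is a name chosen here; the script's own helpers are lt/cmp_versions):

    version_lt() {
        local IFS=.-: v
        local -a a=($1) b=($2)   # relies on IFS word splitting
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo "lcov predates 2.x; enable branch-coverage opts"
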
-- nvmf/common.sh@7 -- # uname -s 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.036 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.298 10:51:14 
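
Note the genuine failure recorded just above: common.sh line 33 runs '[' '' -eq 1 ']', and with single brackets an empty operand to -eq is a test(1) error ('integer expression expected'), not a clean false; the run only continues because the failed test takes the else path. The defensive spelling defaults the variable first (SOME_FLAG is an illustrative name; the log does not show which variable line 33 tests):

    SOME_FLAG=''                            # empty, as in the trace
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # empty defaults to 0: no error
        echo "flag enabled"
    fi
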
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.298 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.299 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.443 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:03.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:03.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.444 10:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:03.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:03.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.444 10:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:11:03.444 00:11:03.444 --- 10.0.0.2 ping statistics --- 00:11:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.444 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:11:03.444 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
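The nvmf_tcp_init trace above (the 10.0.0.1 ping statistics continue just below) condenses to a short namespace-based loopback topology. A minimal standalone sketch, using the interface names from this run and only commands that appear verbatim in the trace:

  # target port cvl_0_0 moves into its own namespace; cvl_0_1 stays as the initiator
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator ns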
00:11:03.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:11:03.444 00:11:03.444 --- 10.0.0.1 ping statistics --- 00:11:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.444 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:03.444 ************************************ 00:11:03.444 START TEST nvmf_filesystem_no_in_capsule 00:11:03.444 ************************************ 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=271068 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 271068 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 271068 ']' 00:11:03.444 10:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.444 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.444 [2024-11-15 10:51:22.178095] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:11:03.444 [2024-11-15 10:51:22.178167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.444 [2024-11-15 10:51:22.277570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.444 [2024-11-15 10:51:22.331278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.444 [2024-11-15 10:51:22.331331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.444 [2024-11-15 10:51:22.331340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.444 [2024-11-15 10:51:22.331348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.444 [2024-11-15 10:51:22.331359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
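The nvmfappstart sequence above launches the target inside the namespace and blocks until the RPC socket answers. A hedged sketch of the same pattern; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its exact implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to serve requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done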
00:11:03.444 [2024-11-15 10:51:22.333734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.444 [2024-11-15 10:51:22.333897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.444 [2024-11-15 10:51:22.334058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.444 [2024-11-15 10:51:22.334058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 [2024-11-15 10:51:23.055428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 Malloc1 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.706 10:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:03.706 [2024-11-15 10:51:23.214771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.706 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:11:03.968 {
00:11:03.968 "name": "Malloc1",
00:11:03.968 "aliases": [
00:11:03.968 "84538951-1548-4e29-ab14-1767cc199a53"
00:11:03.968 ],
00:11:03.968 "product_name": "Malloc disk",
00:11:03.968 "block_size": 512,
00:11:03.968 "num_blocks": 1048576,
00:11:03.968 "uuid": "84538951-1548-4e29-ab14-1767cc199a53",
00:11:03.968 "assigned_rate_limits": {
00:11:03.968 "rw_ios_per_sec": 0,
00:11:03.968 "rw_mbytes_per_sec": 0,
00:11:03.968 "r_mbytes_per_sec": 0,
00:11:03.968 "w_mbytes_per_sec": 0
00:11:03.968 },
00:11:03.968 "claimed": true,
00:11:03.968 "claim_type": "exclusive_write",
00:11:03.968 "zoned": false,
00:11:03.968 "supported_io_types": {
00:11:03.968 "read": true,
00:11:03.968 "write": true,
00:11:03.968 "unmap": true,
00:11:03.968 "flush": true,
00:11:03.968 "reset": true,
00:11:03.968 "nvme_admin": false,
00:11:03.968 "nvme_io": false,
00:11:03.968 "nvme_io_md": false,
00:11:03.968 "write_zeroes": true,
00:11:03.968 "zcopy": true,
00:11:03.968 "get_zone_info": false,
00:11:03.968 "zone_management": false,
00:11:03.968 "zone_append": false,
00:11:03.968 "compare": false,
00:11:03.968 "compare_and_write": false,
00:11:03.968 "abort": true,
00:11:03.968 "seek_hole": false,
00:11:03.968 "seek_data": false,
00:11:03.968 "copy": true,
00:11:03.968 "nvme_iov_md": false
00:11:03.968 },
00:11:03.968 "memory_domains": [
00:11:03.968 {
00:11:03.968 "dma_device_id": "system",
00:11:03.968 "dma_device_type": 1
00:11:03.968 },
00:11:03.968 {
00:11:03.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:03.968 "dma_device_type": 2
00:11:03.968 }
00:11:03.968 ],
00:11:03.968 "driver_specific": {}
00:11:03.968 }
00:11:03.968 ]'
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:03.968 10:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:05.353 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:05.353 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0
00:11:05.353 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:11:05.353 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
00:11:05.353 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2
00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c
SPDKISFASTANDAWESOME 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:07.896 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:07.896 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:07.896 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.839 ************************************ 00:11:08.839 START TEST filesystem_ext4 00:11:08.839 ************************************ 00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
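Before the ext4 pass below starts, the target side is fully provisioned and the initiator is connected. The whole bring-up reduces to the sketch here; every command is lifted from the rpc_cmd/nvme/parted entries above, with the host NQN and host ID shortened to variables, and the size check is plain arithmetic (1048576 blocks x 512 B = 536870912 B = 512 MiB, matching the 512 MiB malloc bdev):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB bdev with 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe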
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']'
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F
00:11:08.839 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:08.839 mke2fs 1.47.0 (5-Feb-2023)
00:11:09.099 Discarding device blocks: 0/522240 done
00:11:09.099 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:09.099 Filesystem UUID: a4a6fc8e-7ecc-494a-9120-c443692d318f
00:11:09.099 Superblock backups stored on blocks:
00:11:09.099 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:09.099
00:11:09.100 Allocating group tables: 0/64 done
00:11:09.100 Writing inode tables: 0/64 done
00:11:09.100 Creating journal (8192 blocks): done
00:11:09.100 Writing superblocks and filesystem accounting information: 0/64 done
00:11:09.100
00:11:09.100 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0
00:11:09.100 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:15.680
10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 271068 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.680 00:11:15.680 real 0m6.038s 00:11:15.680 user 0m0.022s 00:11:15.680 sys 0m0.082s 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:15.680 ************************************ 00:11:15.680 END TEST filesystem_ext4 00:11:15.680 ************************************ 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.680 ************************************ 00:11:15.680 START TEST filesystem_btrfs 00:11:15.680 ************************************ 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:15.680 10:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']'
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:15.680 btrfs-progs v6.8.1
00:11:15.680 See https://btrfs.readthedocs.io for more information.
00:11:15.680
00:11:15.680 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:15.680 NOTE: several default settings have changed in version 5.15, please make sure
00:11:15.680 this does not affect your deployments:
00:11:15.680 - DUP for metadata (-m dup)
00:11:15.680 - enabled no-holes (-O no-holes)
00:11:15.680 - enabled free-space-tree (-R free-space-tree)
00:11:15.680
00:11:15.680 Label: (null)
00:11:15.680 UUID: 443ea35d-20b7-4536-8ef5-10e832bfb964
00:11:15.680 Node size: 16384
00:11:15.680 Sector size: 4096 (CPU page size: 4096)
00:11:15.680 Filesystem size: 510.00MiB
00:11:15.680 Block group profiles:
00:11:15.680 Data: single 8.00MiB
00:11:15.680 Metadata: DUP 32.00MiB
00:11:15.680 System: DUP 8.00MiB
00:11:15.680 SSD detected: yes
00:11:15.680 Zoned device: no
00:11:15.680 Features: extref, skinny-metadata, no-holes, free-space-tree
00:11:15.680 Checksum: crc32c
00:11:15.680 Number of devices: 1
00:11:15.680 Devices:
00:11:15.680 ID SIZE PATH
00:11:15.680 1 510.00MiB /dev/nvme0n1p1
00:11:15.680
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 271068
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:15.680 10:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:15.680
00:11:15.680 real 0m0.535s
00:11:15.680 user 0m0.031s
00:11:15.680 sys 0m0.119s
00:11:15.680 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:15.680 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:15.681 ************************************
00:11:15.681 END TEST filesystem_btrfs
00:11:15.681 ************************************
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:15.681 ************************************
00:11:15.681 START TEST filesystem_xfs
00:11:15.681 ************************************
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f
00:11:15.681 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:15.681 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:15.681 = sectsz=512 attr=2, projid32bit=1
00:11:15.681 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:15.681 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:15.681 data = bsize=4096 blocks=130560, imaxpct=25
00:11:15.681 = sunit=0 swidth=0 blks
00:11:15.681 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:15.681 log =internal log bsize=4096 blocks=16384, version=2
00:11:15.681 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:15.681 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:17.063 Discarding blocks...Done.
00:11:17.063 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0
00:11:17.063 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 271068
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:18.975
00:11:18.975 real 0m3.077s
00:11:18.975 user 0m0.022s
00:11:18.975 sys 0m0.085s
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:11:18.975 ************************************
00:11:18.975 END TEST filesystem_xfs
00:11:18.975 ************************************
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:11:18.975 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:19.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:19.237 10:51:38
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.237 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 271068 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 271068 ']' 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 271068 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 271068 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 271068' 00:11:19.498 killing process with pid 271068 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 271068 00:11:19.498 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 271068 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:19.760 00:11:19.760 real 0m16.932s 00:11:19.760 user 1m6.822s 00:11:19.760 sys 0m1.402s 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.760 ************************************ 00:11:19.760 END TEST nvmf_filesystem_no_in_capsule 00:11:19.760 ************************************ 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.760 ************************************ 00:11:19.760 START TEST nvmf_filesystem_in_capsule 00:11:19.760 ************************************ 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=274507 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 274507 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 274507 ']' 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
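The no_in_capsule half that finishes above runs the same exercise once per filesystem; a condensed sketch of that loop and of the teardown it ends with (the force-flag selection mirrors make_filesystem, which passes -F only for ext4):

  for fstype in ext4 btrfs xfs; do
      force=-f; [ "$fstype" = ext4 ] && force=-F
      mkfs.$fstype $force /dev/nvme0n1p1
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa && sync
      rm /mnt/device/aaa && sync
      umount /mnt/device
  done
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"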
00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.760 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.760 [2024-11-15 10:51:39.187101] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:11:19.760 [2024-11-15 10:51:39.187160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.760 [2024-11-15 10:51:39.281507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.020 [2024-11-15 10:51:39.316168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.020 [2024-11-15 10:51:39.316199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.020 [2024-11-15 10:51:39.316205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.020 [2024-11-15 10:51:39.316210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.020 [2024-11-15 10:51:39.316214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.020 [2024-11-15 10:51:39.317585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.020 [2024-11-15 10:51:39.317727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.020 [2024-11-15 10:51:39.317967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.020 [2024-11-15 10:51:39.317967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.592 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.592 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:20.592 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.592 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.592 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 [2024-11-15 10:51:40.042105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.592 10:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.592 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.852 Malloc1 00:11:20.852 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.852 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.852 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.852 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.852 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.853 [2024-11-15 10:51:40.178076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:20.853 10:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:20.853 { 00:11:20.853 "name": "Malloc1", 00:11:20.853 "aliases": [ 00:11:20.853 "5ee2b8b1-0a3a-4462-a20e-2815ea24c974" 00:11:20.853 ], 00:11:20.853 "product_name": "Malloc disk", 00:11:20.853 "block_size": 512, 00:11:20.853 "num_blocks": 1048576, 00:11:20.853 "uuid": "5ee2b8b1-0a3a-4462-a20e-2815ea24c974", 00:11:20.853 "assigned_rate_limits": { 00:11:20.853 "rw_ios_per_sec": 0, 00:11:20.853 "rw_mbytes_per_sec": 0, 00:11:20.853 "r_mbytes_per_sec": 0, 00:11:20.853 "w_mbytes_per_sec": 0 00:11:20.853 }, 00:11:20.853 "claimed": true, 00:11:20.853 "claim_type": "exclusive_write", 00:11:20.853 "zoned": false, 00:11:20.853 "supported_io_types": { 00:11:20.853 "read": true, 00:11:20.853 "write": true, 00:11:20.853 "unmap": true, 00:11:20.853 "flush": true, 00:11:20.853 "reset": true, 00:11:20.853 "nvme_admin": false, 00:11:20.853 "nvme_io": false, 00:11:20.853 "nvme_io_md": false, 00:11:20.853 "write_zeroes": true, 00:11:20.853 "zcopy": true, 00:11:20.853 "get_zone_info": false, 00:11:20.853 "zone_management": false, 00:11:20.853 "zone_append": false, 00:11:20.853 "compare": false, 00:11:20.853 "compare_and_write": false, 00:11:20.853 "abort": true, 00:11:20.853 "seek_hole": false, 00:11:20.853 "seek_data": false, 00:11:20.853 "copy": true, 00:11:20.853 "nvme_iov_md": false 00:11:20.853 }, 00:11:20.853 "memory_domains": [ 00:11:20.853 { 00:11:20.853 "dma_device_id": "system", 00:11:20.853 "dma_device_type": 1 00:11:20.853 }, 00:11:20.853 { 00:11:20.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.853 "dma_device_type": 2 00:11:20.853 } 00:11:20.853 ], 00:11:20.853 "driver_specific": {} 00:11:20.853 } 00:11:20.853 ]' 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:20.853 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.767 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.767 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:22.767 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.767 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:22.767 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:24.679 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:24.679 10:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:25.249 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.188 ************************************ 00:11:26.188 START TEST filesystem_in_capsule_ext4 00:11:26.188 ************************************ 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:26.188 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:26.189 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:26.189 mke2fs 1.47.0 (5-Feb-2023) 00:11:26.189 Discarding device blocks: 0/522240 done 00:11:26.189 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:26.189 Filesystem UUID: 25bf1b4b-5039-4257-ada9-a673fb8d5249 00:11:26.189 Superblock backups stored on blocks: 00:11:26.189 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:26.189 00:11:26.189 Allocating group tables: 0/64 done 00:11:26.189 Writing inode tables: 
0/64 done 00:11:26.450 Creating journal (8192 blocks): done 00:11:26.450 Writing superblocks and filesystem accounting information: 0/64 done 00:11:26.450 00:11:26.450 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:26.450 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.738 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 274507 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.999 00:11:31.999 real 0m5.725s 00:11:31.999 user 0m0.027s 00:11:31.999 sys 0m0.077s 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 ************************************ 00:11:31.999 END TEST filesystem_in_capsule_ext4 00:11:31.999 ************************************ 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 
************************************ 00:11:31.999 START TEST filesystem_in_capsule_btrfs 00:11:31.999 ************************************ 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:31.999 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:32.260 btrfs-progs v6.8.1 00:11:32.260 See https://btrfs.readthedocs.io for more information. 00:11:32.260 00:11:32.260 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:32.260 NOTE: several default settings have changed in version 5.15, please make sure 00:11:32.260 this does not affect your deployments: 00:11:32.260 - DUP for metadata (-m dup) 00:11:32.260 - enabled no-holes (-O no-holes) 00:11:32.260 - enabled free-space-tree (-R free-space-tree) 00:11:32.260 00:11:32.260 Label: (null) 00:11:32.260 UUID: f6250440-2fea-43e2-a33b-4d3bcfb810ce 00:11:32.260 Node size: 16384 00:11:32.260 Sector size: 4096 (CPU page size: 4096) 00:11:32.260 Filesystem size: 510.00MiB 00:11:32.260 Block group profiles: 00:11:32.260 Data: single 8.00MiB 00:11:32.261 Metadata: DUP 32.00MiB 00:11:32.261 System: DUP 8.00MiB 00:11:32.261 SSD detected: yes 00:11:32.261 Zoned device: no 00:11:32.261 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:32.261 Checksum: crc32c 00:11:32.261 Number of devices: 1 00:11:32.261 Devices: 00:11:32.261 ID SIZE PATH 00:11:32.261 1 510.00MiB /dev/nvme0n1p1 00:11:32.261 00:11:32.261 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:32.261 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 274507 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.832 00:11:32.832 real 0m0.897s 00:11:32.832 user 0m0.030s 00:11:32.832 sys 0m0.117s 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.832 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:32.832 ************************************ 00:11:32.832 END TEST filesystem_in_capsule_btrfs 00:11:32.832 ************************************ 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.093 ************************************ 00:11:33.093 START TEST filesystem_in_capsule_xfs 00:11:33.093 ************************************ 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:33.093 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:33.094 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:33.094 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:33.094 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:33.094 = sectsz=512 attr=2, projid32bit=1 00:11:33.094 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:33.094 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:33.094 data = bsize=4096 blocks=130560, imaxpct=25 00:11:33.094 = sunit=0 swidth=0 blks 00:11:33.094 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:33.094 log =internal log bsize=4096 blocks=16384, version=2 00:11:33.094 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:33.094 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:34.037 Discarding blocks...Done. 
00:11:34.037 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:34.037 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 274507 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.579 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.579 00:11:36.580 real 0m3.348s 00:11:36.580 user 0m0.027s 00:11:36.580 sys 0m0.079s 00:11:36.580 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.580 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 ************************************ 00:11:36.580 END TEST filesystem_in_capsule_xfs 00:11:36.580 ************************************ 00:11:36.580 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:36.580 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:36.580 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 274507 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 274507 ']' 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 274507 00:11:36.580 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 274507 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 274507' 00:11:36.840 killing process with pid 274507 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 274507 00:11:36.840 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 274507 00:11:37.101 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:37.101 00:11:37.101 real 0m17.249s 00:11:37.102 user 1m8.188s 00:11:37.102 sys 0m1.383s 00:11:37.102 10:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.102 ************************************ 00:11:37.102 END TEST nvmf_filesystem_in_capsule 00:11:37.102 ************************************ 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.102 rmmod nvme_tcp 00:11:37.102 rmmod nvme_fabrics 00:11:37.102 rmmod nvme_keyring 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.102 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.649 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.650 00:11:39.650 real 0m44.478s 00:11:39.650 user 2m17.382s 00:11:39.650 sys 0m8.686s 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.650 
************************************ 00:11:39.650 END TEST nvmf_filesystem 00:11:39.650 ************************************ 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.650 ************************************ 00:11:39.650 START TEST nvmf_target_discovery 00:11:39.650 ************************************ 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:39.650 * Looking for test storage... 00:11:39.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.650 --rc genhtml_branch_coverage=1 00:11:39.650 --rc genhtml_function_coverage=1 00:11:39.650 --rc genhtml_legend=1 00:11:39.650 --rc geninfo_all_blocks=1 00:11:39.650 --rc geninfo_unexecuted_blocks=1 00:11:39.650 00:11:39.650 ' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.650 --rc genhtml_branch_coverage=1 00:11:39.650 --rc genhtml_function_coverage=1 00:11:39.650 --rc genhtml_legend=1 00:11:39.650 --rc geninfo_all_blocks=1 00:11:39.650 --rc geninfo_unexecuted_blocks=1 00:11:39.650 00:11:39.650 ' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.650 --rc genhtml_branch_coverage=1 00:11:39.650 --rc genhtml_function_coverage=1 00:11:39.650 --rc genhtml_legend=1 00:11:39.650 --rc geninfo_all_blocks=1 00:11:39.650 --rc geninfo_unexecuted_blocks=1 00:11:39.650 00:11:39.650 ' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.650 --rc genhtml_branch_coverage=1 00:11:39.650 --rc genhtml_function_coverage=1 00:11:39.650 --rc genhtml_legend=1 00:11:39.650 --rc geninfo_all_blocks=1 00:11:39.650 --rc geninfo_unexecuted_blocks=1 00:11:39.650 00:11:39.650 ' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.650 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.651 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.801 10:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.801 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:47.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:47.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:47.802 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:47.802 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.802 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.802 10:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:11:47.802 00:11:47.802 --- 10.0.0.2 ping statistics --- 00:11:47.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.802 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:11:47.802 00:11:47.802 --- 10.0.0.1 ping statistics --- 00:11:47.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.802 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.802 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=282295 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 282295 00:11:47.803 10:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 282295 ']' 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:47.803 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 [2024-11-15 10:52:06.381173] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:11:47.803 [2024-11-15 10:52:06.381239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.803 [2024-11-15 10:52:06.482913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.803 [2024-11-15 10:52:06.536202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.803 [2024-11-15 10:52:06.536259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.803 [2024-11-15 10:52:06.536268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.803 [2024-11-15 10:52:06.536275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.803 [2024-11-15 10:52:06.536288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
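
The netns plumbing and app launch traced across the last few records are SPDK's standard two-port TCP rig: the target-side interface (cvl_0_0) moves into a private network namespace, the initiator side (cvl_0_1) stays in the root namespace, and nvmf_tgt is started inside the namespace so that 10.0.0.1 <-> 10.0.0.2 traffic actually crosses the E810 ports instead of loopback. A hedged replay of that sequence, assuming the two ports are cabled to each other as on this CI rig, with a polling loop standing in for the script's waitforlisten helper:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port disappears into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1     # and the reverse path
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# stand-in for waitforlisten: poll the RPC socket until the app answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done

The DPDK and reactor notices that follow are nvmf_tgt coming up on cores 0-3, matching the -m 0xF mask passed above.
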
00:11:47.803 [2024-11-15 10:52:06.538533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.803 [2024-11-15 10:52:06.538692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.803 [2024-11-15 10:52:06.538749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.803 [2024-11-15 10:52:06.538889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 [2024-11-15 10:52:07.256431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 Null1 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 10:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.077 [2024-11-15 10:52:07.336880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 Null2 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:48.078 Null3 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 Null4 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.078 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:48.371 00:11:48.371 Discovery Log Number of Records 6, Generation counter 6 00:11:48.371 =====Discovery Log Entry 0====== 00:11:48.371 trtype: tcp 00:11:48.371 adrfam: ipv4 00:11:48.371 subtype: current discovery subsystem 00:11:48.371 treq: not required 00:11:48.371 portid: 0 00:11:48.371 trsvcid: 4420 00:11:48.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:48.371 traddr: 10.0.0.2 00:11:48.371 eflags: explicit discovery connections, duplicate discovery information 00:11:48.371 sectype: none 00:11:48.371 =====Discovery Log Entry 1====== 00:11:48.371 trtype: tcp 00:11:48.371 adrfam: ipv4 00:11:48.371 subtype: nvme subsystem 00:11:48.371 treq: not required 00:11:48.371 portid: 0 00:11:48.371 trsvcid: 4420 00:11:48.371 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:48.371 traddr: 10.0.0.2 00:11:48.371 eflags: none 00:11:48.371 sectype: none 00:11:48.371 =====Discovery Log Entry 2====== 00:11:48.371 trtype: tcp 00:11:48.371 adrfam: ipv4 00:11:48.371 subtype: nvme subsystem 00:11:48.371 treq: not required 00:11:48.371 portid: 0 00:11:48.371 trsvcid: 4420 00:11:48.371 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:48.371 traddr: 10.0.0.2 00:11:48.371 eflags: none 00:11:48.371 sectype: none 00:11:48.371 =====Discovery Log Entry 3====== 00:11:48.371 trtype: tcp 00:11:48.371 adrfam: ipv4 00:11:48.371 subtype: nvme subsystem 00:11:48.371 treq: not required 00:11:48.371 portid: 0 00:11:48.371 trsvcid: 4420 00:11:48.371 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:48.371 traddr: 10.0.0.2 00:11:48.371 eflags: none 00:11:48.371 sectype: none 00:11:48.371 =====Discovery Log Entry 4====== 00:11:48.371 trtype: tcp 00:11:48.371 adrfam: ipv4 00:11:48.371 subtype: nvme subsystem 
00:11:48.371 treq: not required 00:11:48.371 portid: 0 00:11:48.372 trsvcid: 4420 00:11:48.372 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:48.372 traddr: 10.0.0.2 00:11:48.372 eflags: none 00:11:48.372 sectype: none 00:11:48.372 =====Discovery Log Entry 5====== 00:11:48.372 trtype: tcp 00:11:48.372 adrfam: ipv4 00:11:48.372 subtype: discovery subsystem referral 00:11:48.372 treq: not required 00:11:48.372 portid: 0 00:11:48.372 trsvcid: 4430 00:11:48.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:48.372 traddr: 10.0.0.2 00:11:48.372 eflags: none 00:11:48.372 sectype: none 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:48.372 Perform nvmf subsystem discovery via RPC 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 [ 00:11:48.372 { 00:11:48.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:48.372 "subtype": "Discovery", 00:11:48.372 "listen_addresses": [ 00:11:48.372 { 00:11:48.372 "trtype": "TCP", 00:11:48.372 "adrfam": "IPv4", 00:11:48.372 "traddr": "10.0.0.2", 00:11:48.372 "trsvcid": "4420" 00:11:48.372 } 00:11:48.372 ], 00:11:48.372 "allow_any_host": true, 00:11:48.372 "hosts": [] 00:11:48.372 }, 00:11:48.372 { 00:11:48.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.372 "subtype": "NVMe", 00:11:48.372 "listen_addresses": [ 00:11:48.372 { 00:11:48.372 "trtype": "TCP", 00:11:48.372 "adrfam": "IPv4", 00:11:48.372 "traddr": "10.0.0.2", 00:11:48.372 "trsvcid": "4420" 00:11:48.372 } 00:11:48.372 ], 00:11:48.372 "allow_any_host": true, 00:11:48.372 "hosts": [], 00:11:48.372 "serial_number": "SPDK00000000000001", 00:11:48.372 "model_number": "SPDK bdev Controller", 00:11:48.372 "max_namespaces": 32, 00:11:48.372 "min_cntlid": 1, 00:11:48.372 "max_cntlid": 65519, 00:11:48.372 "namespaces": [ 00:11:48.372 { 00:11:48.372 "nsid": 1, 00:11:48.372 "bdev_name": "Null1", 00:11:48.372 "name": "Null1", 00:11:48.372 "nguid": "5677754F12C745F1BEB94F637A5CE21D", 00:11:48.372 "uuid": "5677754f-12c7-45f1-beb9-4f637a5ce21d" 00:11:48.372 } 00:11:48.372 ] 00:11:48.372 }, 00:11:48.372 { 00:11:48.372 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:48.372 "subtype": "NVMe", 00:11:48.372 "listen_addresses": [ 00:11:48.372 { 00:11:48.372 "trtype": "TCP", 00:11:48.372 "adrfam": "IPv4", 00:11:48.372 "traddr": "10.0.0.2", 00:11:48.372 "trsvcid": "4420" 00:11:48.372 } 00:11:48.372 ], 00:11:48.372 "allow_any_host": true, 00:11:48.372 "hosts": [], 00:11:48.372 "serial_number": "SPDK00000000000002", 00:11:48.372 "model_number": "SPDK bdev Controller", 00:11:48.372 "max_namespaces": 32, 00:11:48.372 "min_cntlid": 1, 00:11:48.372 "max_cntlid": 65519, 00:11:48.372 "namespaces": [ 00:11:48.372 { 00:11:48.372 "nsid": 1, 00:11:48.372 "bdev_name": "Null2", 00:11:48.372 "name": "Null2", 00:11:48.372 "nguid": "2511D2B7497742A5A080FB76AA4DEA3E", 00:11:48.372 "uuid": "2511d2b7-4977-42a5-a080-fb76aa4dea3e" 00:11:48.372 } 00:11:48.372 ] 00:11:48.372 }, 00:11:48.372 { 00:11:48.372 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:48.372 "subtype": "NVMe", 00:11:48.372 "listen_addresses": [ 00:11:48.372 { 00:11:48.372 "trtype": "TCP", 00:11:48.372 "adrfam": "IPv4", 00:11:48.372 "traddr": "10.0.0.2", 
00:11:48.372 "trsvcid": "4420" 00:11:48.372 } 00:11:48.372 ], 00:11:48.372 "allow_any_host": true, 00:11:48.372 "hosts": [], 00:11:48.372 "serial_number": "SPDK00000000000003", 00:11:48.372 "model_number": "SPDK bdev Controller", 00:11:48.372 "max_namespaces": 32, 00:11:48.372 "min_cntlid": 1, 00:11:48.372 "max_cntlid": 65519, 00:11:48.372 "namespaces": [ 00:11:48.372 { 00:11:48.372 "nsid": 1, 00:11:48.372 "bdev_name": "Null3", 00:11:48.372 "name": "Null3", 00:11:48.372 "nguid": "83B8C6CBB2F34320AE4A4E7B5B755115", 00:11:48.372 "uuid": "83b8c6cb-b2f3-4320-ae4a-4e7b5b755115" 00:11:48.372 } 00:11:48.372 ] 00:11:48.372 }, 00:11:48.372 { 00:11:48.372 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:48.372 "subtype": "NVMe", 00:11:48.372 "listen_addresses": [ 00:11:48.372 { 00:11:48.372 "trtype": "TCP", 00:11:48.372 "adrfam": "IPv4", 00:11:48.372 "traddr": "10.0.0.2", 00:11:48.372 "trsvcid": "4420" 00:11:48.372 } 00:11:48.372 ], 00:11:48.372 "allow_any_host": true, 00:11:48.372 "hosts": [], 00:11:48.372 "serial_number": "SPDK00000000000004", 00:11:48.372 "model_number": "SPDK bdev Controller", 00:11:48.372 "max_namespaces": 32, 00:11:48.372 "min_cntlid": 1, 00:11:48.372 "max_cntlid": 65519, 00:11:48.372 "namespaces": [ 00:11:48.372 { 00:11:48.372 "nsid": 1, 00:11:48.372 "bdev_name": "Null4", 00:11:48.372 "name": "Null4", 00:11:48.372 "nguid": "FF4A55359EC24DC5931824884D25A71E", 00:11:48.372 "uuid": "ff4a5535-9ec2-4dc5-9318-24884d25a71e" 00:11:48.372 } 00:11:48.372 ] 00:11:48.372 } 00:11:48.372 ] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.372 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.373 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:48.373 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:48.373 10:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.373 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.656 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.657 rmmod nvme_tcp 00:11:48.657 rmmod nvme_fabrics 00:11:48.657 rmmod nvme_keyring 00:11:48.657 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 282295 ']' 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 282295 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 282295 ']' 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 282295 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 282295 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 282295' 00:11:48.657 killing process with pid 282295 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 282295 00:11:48.657 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 282295 00:11:48.944 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.944 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.899 00:11:50.899 real 0m11.683s 00:11:50.899 user 0m9.193s 00:11:50.899 sys 0m6.000s 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.899 ************************************ 00:11:50.899 END TEST nvmf_target_discovery 00:11:50.899 ************************************ 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.899 ************************************ 00:11:50.899 START TEST nvmf_referrals 00:11:50.899 ************************************ 00:11:50.899 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:51.161 * Looking for test storage... 
00:11:51.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.161 --rc genhtml_branch_coverage=1 00:11:51.161 --rc genhtml_function_coverage=1 00:11:51.161 --rc genhtml_legend=1 00:11:51.161 --rc geninfo_all_blocks=1 00:11:51.161 --rc geninfo_unexecuted_blocks=1 00:11:51.161 00:11:51.161 ' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.161 --rc genhtml_branch_coverage=1 00:11:51.161 --rc genhtml_function_coverage=1 00:11:51.161 --rc genhtml_legend=1 00:11:51.161 --rc geninfo_all_blocks=1 00:11:51.161 --rc geninfo_unexecuted_blocks=1 00:11:51.161 00:11:51.161 ' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.161 --rc genhtml_branch_coverage=1 00:11:51.161 --rc genhtml_function_coverage=1 00:11:51.161 --rc genhtml_legend=1 00:11:51.161 --rc geninfo_all_blocks=1 00:11:51.161 --rc geninfo_unexecuted_blocks=1 00:11:51.161 00:11:51.161 ' 00:11:51.161 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.162 --rc genhtml_branch_coverage=1 00:11:51.162 --rc genhtml_function_coverage=1 00:11:51.162 --rc genhtml_legend=1 00:11:51.162 --rc geninfo_all_blocks=1 00:11:51.162 --rc geninfo_unexecuted_blocks=1 00:11:51.162 00:11:51.162 ' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
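
referrals.sh opens by pinning its constants; the remaining ones (127.0.0.4, referral port 4430, the discovery NQN and test NQN) scroll past immediately below. Given the nvmf_discovery_add_referral / nvmf_discovery_remove_referral calls traced in the discovery test above, the referral bookkeeping this test exercises plausibly reduces to RPCs of this shape; a sketch under that assumption, not the test's literal body:

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  # same flags as the rpc_cmd invocation traced earlier: transport, address, service id
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# each referral then surfaces as a 'discovery subsystem referral' entry in
# 'nvme discover' output (compare Discovery Log Entry 5 above); removal mirrors it:
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
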
00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.162 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:59.309 10:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:59.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:59.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:59.309 
10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:59.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:59.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.309 10:52:17 
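
[editor's note] Each "Found net devices under ..." line above comes from globbing the net/ directory of one PCI function in sysfs and keeping interfaces whose link is up. A hedged standalone equivalent (standard sysfs paths; the address is the one from this run):

  pci=0000:4b:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $path ]] || continue                   # glob matched nothing: no netdev bound
      [[ $(<"$path/operstate") == up ]] || continue
      echo "Found net devices under $pci: ${path##*/}"
  done
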
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.309 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.309 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.309 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.309 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.309 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:11:59.309 00:11:59.309 --- 10.0.0.2 ping statistics --- 00:11:59.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.310 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:59.310 00:11:59.310 --- 10.0.0.1 ping statistics --- 00:11:59.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.310 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=286992 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 286992 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 286992 ']' 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
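
[editor's note] nvmf_tcp_init above splits one host into target and initiator by moving one port into a private network namespace, so both ends of the fabric run on a single machine and the cross-namespace pings prove the path works. A condensed sketch of that topology, using hypothetical interface names eth_t/eth_i in place of cvl_0_0/cvl_0_1:

  ip netns add nvmf_tgt_ns                          # namespace name is illustrative
  ip link set eth_t netns nvmf_tgt_ns               # target port leaves the host stack
  ip addr add 10.0.0.1/24 dev eth_i                 # initiator side stays in the host
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_t
  ip link set eth_i up
  ip netns exec nvmf_tgt_ns ip link set eth_t up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth_i -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                # host -> namespace sanity check
  ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1      # and back
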
00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.310 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.310 [2024-11-15 10:52:18.245108] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:11:59.310 [2024-11-15 10:52:18.245172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.310 [2024-11-15 10:52:18.344559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.310 [2024-11-15 10:52:18.397187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.310 [2024-11-15 10:52:18.397234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.310 [2024-11-15 10:52:18.397242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.310 [2024-11-15 10:52:18.397249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.310 [2024-11-15 10:52:18.397256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.310 [2024-11-15 10:52:18.399620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.310 [2024-11-15 10:52:18.399746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.310 [2024-11-15 10:52:18.399907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.310 [2024-11-15 10:52:18.399908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.570 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:59.570 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:59.570 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.570 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.570 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 [2024-11-15 10:52:19.113450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
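
[editor's note] waitforlisten above blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that polling pattern; the retry budget and sleep interval are illustrative, while rpc.py and the rpc_get_methods method are real SPDK pieces:

  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      # Socket exists and the app answers a trivial RPC: it is up.
      [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done
  (( i < 100 )) || { echo "process never started listening on $rpc_addr" >&2; exit 1; }
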
00:11:59.832 [2024-11-15 10:52:19.137830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.832 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.093 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:00.094 10:52:19 
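
[editor's note] The referral checks above always verify from two angles: the target's own RPC view and what an initiator reads back from the discovery log page. A condensed replay of that round trip; the commands and the jq filter are taken from the trace, and rpc_cmd is the test suite's wrapper around rpc.py:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # Target-side view: the referral list kept by nvmf_tgt itself.
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # Initiator-side view: the same entry as seen in the discovery log page.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
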
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:00.094 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.094 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.094 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.094 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.094 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.355 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.617 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.879 10:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.879 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.880 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.142 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.403 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.664 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.664 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.664 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.664 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
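
[editor's note] A referral added with -n carries a subsystem NQN, so the discovery log can advertise either another discovery service or a concrete NVM subsystem; get_discovery_entries above tells the two apart by record subtype. Condensed from the trace:

  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # Pull only one class of records out of the discovery log page.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq '.records[] | select(.subtype == "nvme subsystem")'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq '.records[] | select(.subtype == "discovery subsystem referral")'
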
00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.924 rmmod nvme_tcp 00:12:01.924 rmmod nvme_fabrics 00:12:01.924 rmmod nvme_keyring 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 286992 ']' 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 286992 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 286992 ']' 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 286992 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 286992 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:01.924 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:01.925 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 286992' 00:12:01.925 killing process with pid 286992 00:12:01.925 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 286992 00:12:01.925 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 286992 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.184 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
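
[editor's note] killprocess above refuses to signal anything it cannot positively identify: the pid must still exist and its comm must be an SPDK reactor, never sudo. A minimal sketch of the same guard, with the pid from this run used purely for illustration:

  pid=286992
  if kill -0 "$pid" 2>/dev/null; then                # pid exists and we may signal it
      name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
      if [[ $name != sudo ]]; then                   # never kill a sudo wrapper
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" 2>/dev/null || true            # reap it if it was our child
      fi
  fi
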
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.098 00:12:04.098 real 0m13.162s 00:12:04.098 user 0m15.459s 00:12:04.098 sys 0m6.537s 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 ************************************ 00:12:04.098 END TEST nvmf_referrals 00:12:04.098 ************************************ 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.098 10:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.360 ************************************ 00:12:04.360 START TEST nvmf_connect_disconnect 00:12:04.360 ************************************ 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:04.360 * Looking for test storage... 00:12:04.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.360 --rc genhtml_branch_coverage=1 00:12:04.360 --rc genhtml_function_coverage=1 00:12:04.360 --rc genhtml_legend=1 00:12:04.360 --rc geninfo_all_blocks=1 00:12:04.360 --rc geninfo_unexecuted_blocks=1 00:12:04.360 00:12:04.360 ' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.360 --rc genhtml_branch_coverage=1 00:12:04.360 --rc genhtml_function_coverage=1 00:12:04.360 --rc genhtml_legend=1 00:12:04.360 --rc geninfo_all_blocks=1 00:12:04.360 --rc geninfo_unexecuted_blocks=1 00:12:04.360 00:12:04.360 ' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.360 --rc genhtml_branch_coverage=1 00:12:04.360 --rc genhtml_function_coverage=1 00:12:04.360 --rc genhtml_legend=1 00:12:04.360 --rc geninfo_all_blocks=1 00:12:04.360 --rc geninfo_unexecuted_blocks=1 00:12:04.360 00:12:04.360 ' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:04.360 --rc 
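
[editor's note] The lt/cmp_versions dance above compares dotted version strings field by field, which is why lcov 1.15 sorts below 2. A compact sketch of the same idea; it assumes purely numeric fields, whereas the traced helper also validates each field with a regex before comparing:

  version_lt() {                       # returns 0 when $1 < $2
      local IFS=.-                     # split on dots and dashes, as the trace does
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                         # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"
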
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.360 --rc genhtml_branch_coverage=1 00:12:04.360 --rc genhtml_function_coverage=1 00:12:04.360 --rc genhtml_legend=1 00:12:04.360 --rc geninfo_all_blocks=1 00:12:04.360 --rc geninfo_unexecuted_blocks=1 00:12:04.360 00:12:04.360 ' 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.360 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.361 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:04.361 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.622 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.623 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.623 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.623 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.770 
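
[editor's note] The "integer expression expected" line above is a genuine bash diagnostic, not log corruption: test's -eq needs integers on both sides, and the flag being tested expanded to an empty string. Defaulting the expansion silences it; the variable name here is hypothetical:

  flag=""                                   # unset/empty, as in '[' '' -eq 1 ']'
  [ "$flag" -eq 1 ]                         # prints: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled    # :-0 keeps the operand numeric
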
10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:12.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.770 
10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:12.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.770 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:12.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
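The `[: : integer expression expected` message from nvmf/common.sh line 33 near the top of this section is bash complaining that `'[' '' -eq 1 ']'` compares an empty string as an integer; the test still evaluates false, so the run continues, but a defensive default would silence the noise. A minimal sketch, assuming a hypothetical SOME_FLAG variable rather than the exact one common.sh tests:

    # Substitute 0 when the variable is unset or empty so -eq always sees an integer.
    # SOME_FLAG is a hypothetical stand-in for whatever common.sh line 33 checks.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi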
00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:12.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:12:12.771 00:12:12.771 --- 10.0.0.2 ping statistics --- 00:12:12.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.771 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:12.771 00:12:12.771 --- 10.0.0.1 ping statistics --- 00:12:12.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.771 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=291774 00:12:12.771 10:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 291774 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 291774 ']' 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.771 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.771 [2024-11-15 10:52:31.487313] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:12:12.771 [2024-11-15 10:52:31.487382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.771 [2024-11-15 10:52:31.589527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.771 [2024-11-15 10:52:31.642713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.771 [2024-11-15 10:52:31.642771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.771 [2024-11-15 10:52:31.642779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.771 [2024-11-15 10:52:31.642786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.771 [2024-11-15 10:52:31.642793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
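For context on the startup traced above: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the harness then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that pattern using the flags from this run, with the polling loop approximated rather than copied from the repo's helper:

    # Launch nvmf_tgt inside the test namespace (unix RPC socket is still
    # reachable from the default namespace, since netns does not isolate it).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the app is up -- roughly what waitforlisten does.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done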
00:12:12.771 [2024-11-15 10:52:31.644936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.771 [2024-11-15 10:52:31.645099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.771 [2024-11-15 10:52:31.645271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.771 [2024-11-15 10:52:31.645272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 [2024-11-15 10:52:32.367913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 10:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 [2024-11-15 10:52:32.445998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:13.033 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:17.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.347 rmmod nvme_tcp 00:12:31.347 rmmod nvme_fabrics 00:12:31.347 rmmod nvme_keyring 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 291774 ']' 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 291774 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 291774 ']' 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 291774 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
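The five "disconnected 1 controller(s)" lines above came from connect_disconnect.sh looping num_iterations=5 times against the subsystem configured just before them. A rough sketch of that loop, with the body approximated from the listener and serial set up in this run rather than taken verbatim from the script:

    # Connect to and disconnect from cnode1 five times over the 10.0.0.2:4420 listener.
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME   # helper from this repo's test common.sh
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done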
00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 291774 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 291774' 00:12:31.347 killing process with pid 291774 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 291774 00:12:31.347 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 291774 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.608 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.518 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.518 00:12:33.518 real 0m29.385s 00:12:33.518 user 1m19.024s 00:12:33.518 sys 0m7.213s 00:12:33.518 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:33.518 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.518 ************************************ 00:12:33.518 END TEST nvmf_connect_disconnect 00:12:33.518 ************************************ 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:33.779 ************************************ 00:12:33.779 START TEST nvmf_multitarget 00:12:33.779 ************************************ 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.779 * Looking for test storage... 00:12:33.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.779 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.040 --rc genhtml_branch_coverage=1 00:12:34.040 --rc genhtml_function_coverage=1 00:12:34.040 --rc genhtml_legend=1 00:12:34.040 --rc geninfo_all_blocks=1 00:12:34.040 --rc geninfo_unexecuted_blocks=1 00:12:34.040 00:12:34.040 ' 00:12:34.040 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.040 --rc genhtml_branch_coverage=1 00:12:34.040 --rc genhtml_function_coverage=1 00:12:34.041 --rc genhtml_legend=1 00:12:34.041 --rc geninfo_all_blocks=1 00:12:34.041 --rc geninfo_unexecuted_blocks=1 00:12:34.041 00:12:34.041 ' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:34.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.041 --rc genhtml_branch_coverage=1 00:12:34.041 --rc genhtml_function_coverage=1 00:12:34.041 --rc genhtml_legend=1 00:12:34.041 --rc geninfo_all_blocks=1 00:12:34.041 --rc geninfo_unexecuted_blocks=1 00:12:34.041 00:12:34.041 ' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:34.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.041 --rc genhtml_branch_coverage=1 00:12:34.041 --rc genhtml_function_coverage=1 00:12:34.041 --rc genhtml_legend=1 00:12:34.041 --rc geninfo_all_blocks=1 00:12:34.041 --rc geninfo_unexecuted_blocks=1 00:12:34.041 00:12:34.041 ' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.041 10:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:34.041 10:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.041 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
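The scripts/common.sh trace earlier in this test (the `lt 1.15 2` walk through cmp_versions) is the harness deciding whether the installed lcov predates version 2. A stand-alone sketch of that comparison, assuming only what the trace shows:

    # Split versions on . - : and compare numerically field by field; 1.15 < 2 here.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov is older than 2'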
00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:42.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:42.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:42.183 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:42.183 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.183 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:12:42.184 00:12:42.184 --- 10.0.0.2 ping statistics --- 00:12:42.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.184 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:12:42.184 00:12:42.184 --- 10.0.0.1 ping statistics --- 00:12:42.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.184 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=299889 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 299889 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 299889 ']' 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.184 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 [2024-11-15 10:53:00.936556] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:12:42.184 [2024-11-15 10:53:00.936638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.184 [2024-11-15 10:53:01.035868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.184 [2024-11-15 10:53:01.089213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.184 [2024-11-15 10:53:01.089265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.184 [2024-11-15 10:53:01.089278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.184 [2024-11-15 10:53:01.089286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.184 [2024-11-15 10:53:01.089292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.184 [2024-11-15 10:53:01.091379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.184 [2024-11-15 10:53:01.091543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.184 [2024-11-15 10:53:01.091706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.184 [2024-11-15 10:53:01.091835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:42.446 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:42.707 "nvmf_tgt_1" 00:12:42.708 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:42.708 "nvmf_tgt_2" 00:12:42.708 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
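The multitarget assertions traced around this point reduce to a handful of RPC calls: the app starts with one default target, two more are created with nvmf_create_target, the counts are checked with jq, and the extras are deleted again. Condensed from the trace, with the script path shortened:

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default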
00:12:42.708 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:42.968 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:42.968 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:42.968 true 00:12:42.968 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:42.968 true 00:12:42.969 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.969 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.230 rmmod nvme_tcp 00:12:43.230 rmmod nvme_fabrics 00:12:43.230 rmmod nvme_keyring 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 299889 ']' 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 299889 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 299889 ']' 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 299889 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.230 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 299889 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.490 10:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 299889' 00:12:43.490 killing process with pid 299889 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 299889 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 299889 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.490 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.034 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.034 00:12:46.034 real 0m11.881s 00:12:46.034 user 0m10.415s 00:12:46.034 sys 0m6.098s 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.034 ************************************ 00:12:46.034 END TEST nvmf_multitarget 00:12:46.034 ************************************ 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.034 ************************************ 00:12:46.034 START TEST nvmf_rpc 00:12:46.034 ************************************ 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.034 * Looking for test storage... 
00:12:46.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.034 --rc genhtml_branch_coverage=1 00:12:46.034 --rc genhtml_function_coverage=1 00:12:46.034 --rc genhtml_legend=1 00:12:46.034 --rc geninfo_all_blocks=1 00:12:46.034 --rc geninfo_unexecuted_blocks=1 00:12:46.034 00:12:46.034 ' 00:12:46.034 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.035 --rc genhtml_branch_coverage=1 00:12:46.035 --rc genhtml_function_coverage=1 00:12:46.035 --rc genhtml_legend=1 00:12:46.035 --rc geninfo_all_blocks=1 00:12:46.035 --rc geninfo_unexecuted_blocks=1 00:12:46.035 00:12:46.035 ' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.035 --rc genhtml_branch_coverage=1 00:12:46.035 --rc genhtml_function_coverage=1 00:12:46.035 --rc genhtml_legend=1 00:12:46.035 --rc geninfo_all_blocks=1 00:12:46.035 --rc geninfo_unexecuted_blocks=1 00:12:46.035 00:12:46.035 ' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.035 --rc genhtml_branch_coverage=1 00:12:46.035 --rc genhtml_function_coverage=1 00:12:46.035 --rc genhtml_legend=1 00:12:46.035 --rc geninfo_all_blocks=1 00:12:46.035 --rc geninfo_unexecuted_blocks=1 00:12:46.035 00:12:46.035 ' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
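The lt/cmp_versions walk above is scripts/common.sh concluding that lcov 1.15 sorts before 2, which selects the legacy --rc lcov_* option spelling exported right after it. Condensed into one self-contained helper (a sketch of the same field-by-field compare, not the verbatim functions):

    lt() {  # usage: lt 1.15 2 -> returns 0 when $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2, as traced above
        done
        return 1   # equal versions are not strictly less
    }
    # hypothetical flag name; the real script builds lcov_rc_opt instead
    lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_opts=1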
00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.035 10:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.035 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:54.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:54.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:54.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:54.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.177 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.177 10:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:12:54.178 00:12:54.178 --- 10.0.0.2 ping statistics --- 00:12:54.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.178 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:12:54.178
00:12:54.178 --- 10.0.0.1 ping statistics ---
00:12:54.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.178 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=304576
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 304576
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 304576 ']'
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:54.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:54.178 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.178 [2024-11-15 10:53:12.899990] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
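Before this second app start, the rpc test repeated the same fabric bring-up seen earlier in the trace: one ice/e810 port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace as the target side, its sibling cvl_0_1 (10.0.0.1) stayed in the root namespace as the initiator, an iptables ACCEPT rule opened TCP/4420, and reachability was proven in both directions before nvmf_tgt was launched. The two probes, exactly as issued above:

    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator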
00:12:54.178 [2024-11-15 10:53:12.900060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.178 [2024-11-15 10:53:13.004532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.178 [2024-11-15 10:53:13.058179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.178 [2024-11-15 10:53:13.058230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.178 [2024-11-15 10:53:13.058239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.178 [2024-11-15 10:53:13.058246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.178 [2024-11-15 10:53:13.058252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.178 [2024-11-15 10:53:13.060612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.178 [2024-11-15 10:53:13.060810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.178 [2024-11-15 10:53:13.060963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.178 [2024-11-15 10:53:13.060966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.438 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.438 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:54.439 "tick_rate": 2400000000, 00:12:54.439 "poll_groups": [ 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_000", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_001", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_002", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 
"current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_003", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [] 00:12:54.439 } 00:12:54.439 ] 00:12:54.439 }' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.439 [2024-11-15 10:53:13.899043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:54.439 "tick_rate": 2400000000, 00:12:54.439 "poll_groups": [ 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_000", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [ 00:12:54.439 { 00:12:54.439 "trtype": "TCP" 00:12:54.439 } 00:12:54.439 ] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_001", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [ 00:12:54.439 { 00:12:54.439 "trtype": "TCP" 00:12:54.439 } 00:12:54.439 ] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_002", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [ 00:12:54.439 { 00:12:54.439 "trtype": "TCP" 
00:12:54.439 } 00:12:54.439 ] 00:12:54.439 }, 00:12:54.439 { 00:12:54.439 "name": "nvmf_tgt_poll_group_003", 00:12:54.439 "admin_qpairs": 0, 00:12:54.439 "io_qpairs": 0, 00:12:54.439 "current_admin_qpairs": 0, 00:12:54.439 "current_io_qpairs": 0, 00:12:54.439 "pending_bdev_io": 0, 00:12:54.439 "completed_nvme_io": 0, 00:12:54.439 "transports": [ 00:12:54.439 { 00:12:54.439 "trtype": "TCP" 00:12:54.439 } 00:12:54.439 ] 00:12:54.439 } 00:12:54.439 ] 00:12:54.439 }' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.439 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.700 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:54.700 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.700 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.700 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.700 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 Malloc1 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 [2024-11-15 10:53:14.108382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:54.700 [2024-11-15 10:53:14.145367] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:54.700 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:54.700 could not add new controller: failed to write to nvme-fabrics device 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:54.700 10:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.700 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.615 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.615 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:56.615 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.615 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:56.616 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.527 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.528 [2024-11-15 10:53:17.880710] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:58.528 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.528 could not add new controller: failed to write to nvme-fabrics device 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.528 
10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.528 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.438 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.438 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:00.438 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.438 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:00.438 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.348 
10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.348 [2024-11-15 10:53:21.637602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.348 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.729 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.729 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:03.729 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.729 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:03.729 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.270 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.271 [2024-11-15 10:53:25.398805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.271 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.655 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.655 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:07.655 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.655 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:07.655 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:09.632 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:09.632 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:09.632 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
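[editor's note] Each of the five iterations the log walks through is the same provision-connect-teardown cycle driven entirely over the RPC socket. Pulled out of the xtrace, one iteration looks like this (commands exactly as traced; waitforserial/waitforserial_disconnect are the polling helpers sketched above):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done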
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.632 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.632 [2024-11-15 10:53:29.159586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.891 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.273 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.273 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:11.273 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.273 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:11.274 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:13.817 
10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 [2024-11-15 10:53:32.919016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.818 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.201 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.201 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:15.201 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.201 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:15.201 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.111 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 [2024-11-15 10:53:36.672789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.372 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.753 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.753 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:18.753 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.753 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:18.753 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:21.293 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:21.294 
10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 [2024-11-15 10:53:40.427131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 [2024-11-15 10:53:40.495322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 
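[editor's note] The second seq 1 5 loop (target/rpc.sh@99-107) repeats the same subsystem churn but never attaches a host: it creates the subsystem, listener, and namespace, then tears the namespace and subsystem straight back down, exercising the target's create/delete paths without any fabric traffic. One iteration, as traced:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done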
10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 [2024-11-15 10:53:40.559504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 [2024-11-15 10:53:40.627704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 [2024-11-15 10:53:40.695927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:21.295 "tick_rate": 2400000000, 00:13:21.295 "poll_groups": [ 00:13:21.295 { 00:13:21.295 "name": "nvmf_tgt_poll_group_000", 00:13:21.295 "admin_qpairs": 0, 00:13:21.295 "io_qpairs": 224, 00:13:21.295 "current_admin_qpairs": 0, 00:13:21.295 "current_io_qpairs": 0, 00:13:21.295 "pending_bdev_io": 0, 00:13:21.295 "completed_nvme_io": 409, 00:13:21.295 "transports": [ 00:13:21.295 { 00:13:21.295 "trtype": "TCP" 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 }, 00:13:21.295 { 00:13:21.295 "name": "nvmf_tgt_poll_group_001", 00:13:21.295 "admin_qpairs": 1, 00:13:21.295 "io_qpairs": 223, 00:13:21.295 "current_admin_qpairs": 0, 00:13:21.295 "current_io_qpairs": 0, 00:13:21.295 "pending_bdev_io": 0, 00:13:21.295 "completed_nvme_io": 225, 00:13:21.295 "transports": [ 00:13:21.295 { 00:13:21.295 "trtype": "TCP" 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 }, 00:13:21.295 { 00:13:21.295 "name": "nvmf_tgt_poll_group_002", 00:13:21.295 "admin_qpairs": 6, 00:13:21.295 "io_qpairs": 218, 00:13:21.295 "current_admin_qpairs": 0, 00:13:21.295 "current_io_qpairs": 0, 00:13:21.295 "pending_bdev_io": 0, 00:13:21.295 "completed_nvme_io": 328, 00:13:21.295 "transports": [ 00:13:21.295 { 00:13:21.295 "trtype": "TCP" 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 }, 00:13:21.295 { 00:13:21.295 "name": "nvmf_tgt_poll_group_003", 00:13:21.295 "admin_qpairs": 0, 00:13:21.295 "io_qpairs": 224, 00:13:21.295 "current_admin_qpairs": 0, 00:13:21.295 "current_io_qpairs": 0, 00:13:21.295 "pending_bdev_io": 0, 00:13:21.295 "completed_nvme_io": 277, 00:13:21.295 "transports": [ 00:13:21.295 { 00:13:21.295 "trtype": "TCP" 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 }' 00:13:21.295 10:53:40 
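[editor's note] The nvmf_get_stats JSON above feeds the jsum checks that follow next in the trace: each per-poll-group counter is extracted with jq and summed with awk, and the test only asserts the totals are positive. A hedged sketch of that aggregation, assuming the stats were captured into "$stats" as at target/rpc.sh@110 (the exact plumbing between rpc_cmd and jq is inferred from the trace):

    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1}END{print s}'
    }

    # From the JSON above:
    #   jsum '.poll_groups[].admin_qpairs'  ->  0 + 1 + 6 + 0       = 7
    #   jsum '.poll_groups[].io_qpairs'     ->  224 + 223 + 218 + 224 = 889
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))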
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:21.295 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.555 rmmod nvme_tcp 00:13:21.555 rmmod nvme_fabrics 00:13:21.555 rmmod nvme_keyring 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 304576 ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 304576 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 304576 ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 304576 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 304576 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 304576' 
00:13:21.555 killing process with pid 304576 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 304576 00:13:21.555 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 304576 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.815 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.725 00:13:23.725 real 0m38.109s 00:13:23.725 user 1m54.138s 00:13:23.725 sys 0m7.913s 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.725 ************************************ 00:13:23.725 END TEST nvmf_rpc 00:13:23.725 ************************************ 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.725 10:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.986 ************************************ 00:13:23.986 START TEST nvmf_invalid 00:13:23.986 ************************************ 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.986 * Looking for test storage... 
00:13:23.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.986 --rc genhtml_branch_coverage=1 00:13:23.986 --rc genhtml_function_coverage=1 00:13:23.986 --rc genhtml_legend=1 00:13:23.986 --rc geninfo_all_blocks=1 00:13:23.986 --rc geninfo_unexecuted_blocks=1 00:13:23.986 00:13:23.986 ' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.986 --rc genhtml_branch_coverage=1 00:13:23.986 --rc genhtml_function_coverage=1 00:13:23.986 --rc genhtml_legend=1 00:13:23.986 --rc geninfo_all_blocks=1 00:13:23.986 --rc geninfo_unexecuted_blocks=1 00:13:23.986 00:13:23.986 ' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.986 --rc genhtml_branch_coverage=1 00:13:23.986 --rc genhtml_function_coverage=1 00:13:23.986 --rc genhtml_legend=1 00:13:23.986 --rc geninfo_all_blocks=1 00:13:23.986 --rc geninfo_unexecuted_blocks=1 00:13:23.986 00:13:23.986 ' 00:13:23.986 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.986 --rc genhtml_branch_coverage=1 00:13:23.986 --rc genhtml_function_coverage=1 00:13:23.986 --rc genhtml_legend=1 00:13:23.986 --rc geninfo_all_blocks=1 00:13:23.986 --rc geninfo_unexecuted_blocks=1 00:13:23.986 00:13:23.986 ' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:23.987 10:53:43 
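[editor's note] By this point the trace has moved into the nvmf_invalid test and is running the lcov version gate from scripts/common.sh: the version string is split on ".-:" into numeric components and compared field by field, so lcov 1.15 sorts below 2 and the lcov-1.x LCOV_OPTS branch is taken. A simplified reconstruction of the traced less-than comparison (the real cmp_versions dispatches on an operator argument; only "<" is sketched here):

    # lt VER1 VER2: true when VER1 sorts strictly below VER2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo "using lcov 1.x coverage options"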
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.987 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.248 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.248 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.248 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.248 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:32.528 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:32.528 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.528 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:32.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:32.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.529 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:13:32.529 00:13:32.529 --- 10.0.0.2 ping statistics --- 00:13:32.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.529 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:32.529 00:13:32.529 --- 10.0.0.1 ping statistics --- 00:13:32.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.529 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=314355 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 314355 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 314355 ']' 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.529 [2024-11-15 10:53:51.133684] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:13:32.529 [2024-11-15 10:53:51.133749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:32.529 [2024-11-15 10:53:51.233887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:32.529 [2024-11-15 10:53:51.288255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:32.529 [2024-11-15 10:53:51.288308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:32.529 [2024-11-15 10:53:51.288317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:32.529 [2024-11-15 10:53:51.288324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:32.529 [2024-11-15 10:53:51.288330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:32.529 [2024-11-15 10:53:51.290698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:32.529 [2024-11-15 10:53:51.290857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:32.529 [2024-11-15 10:53:51.291019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:32.529 [2024-11-15 10:53:51.291019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0
00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:32.529 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:32.529 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:32.529 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:32.529 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15663
00:13:32.789 [2024-11-15 10:53:52.172964] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:32.789 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:32.789 {
00:13:32.789 "nqn": "nqn.2016-06.io.spdk:cnode15663",
00:13:32.789 "tgt_name": "foobar",
00:13:32.789 "method": "nvmf_create_subsystem",
00:13:32.789 "req_id": 1
00:13:32.789 }
00:13:32.789 Got JSON-RPC error response
00:13:32.789 response:
00:13:32.790 {
00:13:32.790 "code": -32603,
00:13:32.790 "message": "Unable to find target foobar"
00:13:32.790 }'
00:13:32.790 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:32.790 {
00:13:32.790 "nqn": "nqn.2016-06.io.spdk:cnode15663",
00:13:32.790 "tgt_name": "foobar",
00:13:32.790 "method": "nvmf_create_subsystem",
00:13:32.790 "req_id": 1
00:13:32.790 }
00:13:32.790 Got JSON-RPC error response
response:
00:13:32.790 {
00:13:32.790 "code": -32603,
00:13:32.790 "message": "Unable to find target foobar"
00:13:32.790 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:32.790 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:32.790 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21112
00:13:33.051 [2024-11-15 10:53:52.381835] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21112: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:33.051 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:33.051 {
00:13:33.051 "nqn": "nqn.2016-06.io.spdk:cnode21112",
00:13:33.051 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:33.051 "method": "nvmf_create_subsystem",
00:13:33.051 "req_id": 1
00:13:33.051 }
00:13:33.051 Got JSON-RPC error response
00:13:33.051 response:
00:13:33.051 {
00:13:33.051 "code": -32602,
00:13:33.051 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:33.051 }'
00:13:33.051 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:33.051 {
00:13:33.051 "nqn": "nqn.2016-06.io.spdk:cnode21112",
00:13:33.051 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:33.051 "method": "nvmf_create_subsystem",
00:13:33.051 "req_id": 1
00:13:33.051 }
00:13:33.051 Got JSON-RPC error response
00:13:33.051 response:
00:13:33.051 {
00:13:33.051 "code": -32602,
00:13:33.051 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:33.051 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:33.051 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:33.051 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2561
00:13:33.312 [2024-11-15 10:53:52.586572] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2561: invalid model number 'SPDK_Controller'
00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:33.312 {
00:13:33.312 "nqn": "nqn.2016-06.io.spdk:cnode2561",
00:13:33.312 "model_number": "SPDK_Controller\u001f",
00:13:33.312 "method": "nvmf_create_subsystem",
00:13:33.312 "req_id": 1
00:13:33.312 }
00:13:33.312 Got JSON-RPC error response
00:13:33.312 response:
00:13:33.312 {
00:13:33.312 "code": -32602,
00:13:33.312 "message": "Invalid MN SPDK_Controller\u001f"
00:13:33.312 }'
00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:33.312 {
00:13:33.312 "nqn": "nqn.2016-06.io.spdk:cnode2561",
00:13:33.312 "model_number": "SPDK_Controller\u001f",
00:13:33.312 "method": "nvmf_create_subsystem",
00:13:33.312 "req_id": 1
00:13:33.312 }
00:13:33.312 Got JSON-RPC error response
00:13:33.312 response:
00:13:33.312 {
00:13:33.312 "code": -32602,
00:13:33.312 "message": "Invalid MN SPDK_Controller\u001f"
00:13:33.312 } == *\I\n\v\a\l\i\d\ \M\N* ]]
10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
10:53:52
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.312 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:33.313 
10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 
00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:13:33.313 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0K$Tto~h,2ayd(~{\k7=' 00:13:33.314 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '0K$Tto~h,2ayd(~{\k7=' nqn.2016-06.io.spdk:cnode3307 00:13:33.574 [2024-11-15 10:53:52.968004] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3307: invalid serial number '0K$Tto~h,2ayd(~{\k7=' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:33.574 { 00:13:33.574 "nqn": "nqn.2016-06.io.spdk:cnode3307", 00:13:33.574 "serial_number": "0K$Tto~h,2a\u007fyd(~{\\k7=", 00:13:33.574 "method": "nvmf_create_subsystem", 00:13:33.574 "req_id": 1 00:13:33.574 } 00:13:33.574 Got JSON-RPC error response 00:13:33.574 response: 00:13:33.574 { 00:13:33.574 "code": -32602, 00:13:33.574 "message": "Invalid SN 0K$Tto~h,2a\u007fyd(~{\\k7=" 00:13:33.574 }' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:33.574 { 00:13:33.574 "nqn": "nqn.2016-06.io.spdk:cnode3307", 00:13:33.574 "serial_number": "0K$Tto~h,2a\u007fyd(~{\\k7=", 00:13:33.574 "method": "nvmf_create_subsystem", 00:13:33.574 "req_id": 1 00:13:33.574 } 00:13:33.574 Got JSON-RPC error response 00:13:33.574 response: 00:13:33.574 { 00:13:33.574 "code": -32602, 00:13:33.574 "message": "Invalid SN 0K$Tto~h,2a\u007fyd(~{\\k7=" 00:13:33.574 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' 
'71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x21' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.574 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:33.575 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:33.575 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:33.575 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.575 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.835 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 49 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=D 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.836 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x7f' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|YBJz!4z^w_*1]yqNJ}"b89 =S_3R0"jDw@8C}`p' 00:13:33.837 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '|YBJz!4z^w_*1]yqNJ}"b89 =S_3R0"jDw@8C}`p' nqn.2016-06.io.spdk:cnode26144 00:13:34.097 [2024-11-15 10:53:53.506110] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26144: invalid model number '|YBJz!4z^w_*1]yqNJ}"b89 =S_3R0"jDw@8C}`p' 00:13:34.097 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:34.097 { 00:13:34.097 "nqn": "nqn.2016-06.io.spdk:cnode26144", 00:13:34.097 "model_number": "|YBJz!4z^w_*1]yqNJ}\"b89 =S_3R0\"jDw@8C}`\u007fp", 00:13:34.097 "method": "nvmf_create_subsystem", 00:13:34.097 "req_id": 1 00:13:34.097 } 00:13:34.097 Got JSON-RPC error response 00:13:34.097 response: 00:13:34.097 { 00:13:34.097 "code": -32602, 00:13:34.097 "message": "Invalid MN |YBJz!4z^w_*1]yqNJ}\"b89 =S_3R0\"jDw@8C}`\u007fp" 00:13:34.097 }' 00:13:34.097 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:34.097 { 00:13:34.097 "nqn": "nqn.2016-06.io.spdk:cnode26144", 00:13:34.097 "model_number": "|YBJz!4z^w_*1]yqNJ}\"b89 =S_3R0\"jDw@8C}`\u007fp", 00:13:34.097 "method": "nvmf_create_subsystem", 00:13:34.097 "req_id": 1 00:13:34.097 } 00:13:34.097 Got JSON-RPC error response 00:13:34.097 response: 00:13:34.097 { 00:13:34.097 "code": -32602, 00:13:34.097 "message": "Invalid MN |YBJz!4z^w_*1]yqNJ}\"b89 =S_3R0\"jDw@8C}`\u007fp" 00:13:34.097 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:34.097 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:34.357 [2024-11-15 10:53:53.706998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.357 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:34.617 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:34.617 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:34.617 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:34.617 10:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:34.617 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:34.617 [2024-11-15 10:53:54.096206] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:34.617 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:34.617 { 00:13:34.617 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.617 "listen_address": { 00:13:34.617 "trtype": "tcp", 00:13:34.617 "traddr": "", 00:13:34.617 "trsvcid": "4421" 00:13:34.617 }, 00:13:34.617 "method": "nvmf_subsystem_remove_listener", 00:13:34.617 "req_id": 1 00:13:34.617 } 00:13:34.617 Got JSON-RPC error response 00:13:34.617 response: 00:13:34.617 { 00:13:34.617 "code": -32602, 00:13:34.617 "message": "Invalid parameters" 00:13:34.617 }' 00:13:34.617 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:34.617 { 00:13:34.617 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.617 "listen_address": { 00:13:34.617 "trtype": "tcp", 00:13:34.617 "traddr": "", 00:13:34.617 "trsvcid": "4421" 00:13:34.617 }, 00:13:34.617 "method": "nvmf_subsystem_remove_listener", 00:13:34.617 "req_id": 1 00:13:34.617 } 00:13:34.617 Got JSON-RPC error response 00:13:34.617 response: 00:13:34.617 { 00:13:34.617 "code": -32602, 00:13:34.617 "message": "Invalid parameters" 00:13:34.617 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:34.617 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9511 -i 0 00:13:34.876 [2024-11-15 10:53:54.284760] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9511: invalid cntlid range [0-65519] 00:13:34.876 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:34.876 { 00:13:34.876 "nqn": "nqn.2016-06.io.spdk:cnode9511", 00:13:34.876 "min_cntlid": 0, 00:13:34.876 "method": "nvmf_create_subsystem", 00:13:34.876 "req_id": 1 00:13:34.876 } 00:13:34.876 Got JSON-RPC error response 00:13:34.876 response: 00:13:34.876 { 00:13:34.876 "code": -32602, 00:13:34.876 "message": "Invalid cntlid range [0-65519]" 00:13:34.876 }' 00:13:34.876 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:34.876 { 00:13:34.876 "nqn": "nqn.2016-06.io.spdk:cnode9511", 00:13:34.876 "min_cntlid": 0, 00:13:34.876 "method": "nvmf_create_subsystem", 00:13:34.876 "req_id": 1 00:13:34.876 } 00:13:34.876 Got JSON-RPC error response 00:13:34.876 response: 00:13:34.876 { 00:13:34.876 "code": -32602, 00:13:34.876 "message": "Invalid cntlid range [0-65519]" 00:13:34.876 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.876 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32670 -i 65520 00:13:35.136 [2024-11-15 10:53:54.469350] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32670: invalid cntlid range [65520-65519] 00:13:35.136 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:35.136 { 00:13:35.136 "nqn": 
"nqn.2016-06.io.spdk:cnode32670", 00:13:35.136 "min_cntlid": 65520, 00:13:35.136 "method": "nvmf_create_subsystem", 00:13:35.136 "req_id": 1 00:13:35.136 } 00:13:35.136 Got JSON-RPC error response 00:13:35.136 response: 00:13:35.136 { 00:13:35.136 "code": -32602, 00:13:35.136 "message": "Invalid cntlid range [65520-65519]" 00:13:35.136 }' 00:13:35.136 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:35.136 { 00:13:35.136 "nqn": "nqn.2016-06.io.spdk:cnode32670", 00:13:35.136 "min_cntlid": 65520, 00:13:35.136 "method": "nvmf_create_subsystem", 00:13:35.136 "req_id": 1 00:13:35.136 } 00:13:35.136 Got JSON-RPC error response 00:13:35.136 response: 00:13:35.136 { 00:13:35.136 "code": -32602, 00:13:35.136 "message": "Invalid cntlid range [65520-65519]" 00:13:35.136 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.136 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20188 -I 0 00:13:35.136 [2024-11-15 10:53:54.657911] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20188: invalid cntlid range [1-0] 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:35.398 { 00:13:35.398 "nqn": "nqn.2016-06.io.spdk:cnode20188", 00:13:35.398 "max_cntlid": 0, 00:13:35.398 "method": "nvmf_create_subsystem", 00:13:35.398 "req_id": 1 00:13:35.398 } 00:13:35.398 Got JSON-RPC error response 00:13:35.398 response: 00:13:35.398 { 00:13:35.398 "code": -32602, 00:13:35.398 "message": "Invalid cntlid range [1-0]" 00:13:35.398 }' 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:35.398 { 00:13:35.398 "nqn": "nqn.2016-06.io.spdk:cnode20188", 00:13:35.398 "max_cntlid": 0, 00:13:35.398 "method": "nvmf_create_subsystem", 00:13:35.398 "req_id": 1 00:13:35.398 } 00:13:35.398 Got JSON-RPC error response 00:13:35.398 response: 00:13:35.398 { 00:13:35.398 "code": -32602, 00:13:35.398 "message": "Invalid cntlid range [1-0]" 00:13:35.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7604 -I 65520 00:13:35.398 [2024-11-15 10:53:54.830449] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7604: invalid cntlid range [1-65520] 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:35.398 { 00:13:35.398 "nqn": "nqn.2016-06.io.spdk:cnode7604", 00:13:35.398 "max_cntlid": 65520, 00:13:35.398 "method": "nvmf_create_subsystem", 00:13:35.398 "req_id": 1 00:13:35.398 } 00:13:35.398 Got JSON-RPC error response 00:13:35.398 response: 00:13:35.398 { 00:13:35.398 "code": -32602, 00:13:35.398 "message": "Invalid cntlid range [1-65520]" 00:13:35.398 }' 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:35.398 { 00:13:35.398 "nqn": "nqn.2016-06.io.spdk:cnode7604", 00:13:35.398 "max_cntlid": 65520, 00:13:35.398 "method": "nvmf_create_subsystem", 00:13:35.398 "req_id": 1 00:13:35.398 } 00:13:35.398 Got JSON-RPC error response 00:13:35.398 response: 00:13:35.398 { 00:13:35.398 "code": -32602, 00:13:35.398 "message": "Invalid cntlid range [1-65520]" 
00:13:35.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.398 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16407 -i 6 -I 5 00:13:35.659 [2024-11-15 10:53:55.002991] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16407: invalid cntlid range [6-5] 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:35.659 { 00:13:35.659 "nqn": "nqn.2016-06.io.spdk:cnode16407", 00:13:35.659 "min_cntlid": 6, 00:13:35.659 "max_cntlid": 5, 00:13:35.659 "method": "nvmf_create_subsystem", 00:13:35.659 "req_id": 1 00:13:35.659 } 00:13:35.659 Got JSON-RPC error response 00:13:35.659 response: 00:13:35.659 { 00:13:35.659 "code": -32602, 00:13:35.659 "message": "Invalid cntlid range [6-5]" 00:13:35.659 }' 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:35.659 { 00:13:35.659 "nqn": "nqn.2016-06.io.spdk:cnode16407", 00:13:35.659 "min_cntlid": 6, 00:13:35.659 "max_cntlid": 5, 00:13:35.659 "method": "nvmf_create_subsystem", 00:13:35.659 "req_id": 1 00:13:35.659 } 00:13:35.659 Got JSON-RPC error response 00:13:35.659 response: 00:13:35.659 { 00:13:35.659 "code": -32602, 00:13:35.659 "message": "Invalid cntlid range [6-5]" 00:13:35.659 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:35.659 { 00:13:35.659 "name": "foobar", 00:13:35.659 "method": "nvmf_delete_target", 00:13:35.659 "req_id": 1 00:13:35.659 } 00:13:35.659 Got JSON-RPC error response 00:13:35.659 response: 00:13:35.659 { 00:13:35.659 "code": -32602, 00:13:35.659 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:35.659 }' 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:35.659 { 00:13:35.659 "name": "foobar", 00:13:35.659 "method": "nvmf_delete_target", 00:13:35.659 "req_id": 1 00:13:35.659 } 00:13:35.659 Got JSON-RPC error response 00:13:35.659 response: 00:13:35.659 { 00:13:35.659 "code": -32602, 00:13:35.659 "message": "The specified target doesn't exist, cannot delete it." 
00:13:35.659 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.659 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.659 rmmod nvme_tcp 00:13:35.659 rmmod nvme_fabrics 00:13:35.659 rmmod nvme_keyring 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 314355 ']' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 314355 ']' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 314355' 00:13:35.919 killing process with pid 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 314355 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.919 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:38.460 00:13:38.460 real 0m14.202s 00:13:38.460 user 0m21.020s 00:13:38.460 sys 0m6.775s 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.460 ************************************ 00:13:38.460 END TEST nvmf_invalid 00:13:38.460 ************************************ 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.460 ************************************ 00:13:38.460 START TEST nvmf_connect_stress 00:13:38.460 ************************************ 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:38.460 * Looking for test storage... 
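An aside on the teardown idiom traced just above: every firewall rule the harness installs carries an SPDK_NVMF comment, so nvmftestfini can restore the host ruleset minus its own entries without touching anything else. A minimal sketch of the pair, using the interface name from this run; the install side reappears verbatim at the setup step later in this log, and the teardown is evidently the pipeline of the three commands traced above:

# install: accept NVMe/TCP traffic on the initiator-side interface,
# tagged with a comment so it can be found again at teardown
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# teardown ("iptr"): replay the full ruleset with the tagged lines filtered out
iptables-save | grep -v SPDK_NVMF | iptables-restore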
00:13:38.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.460 --rc genhtml_branch_coverage=1 00:13:38.460 --rc genhtml_function_coverage=1 00:13:38.460 --rc genhtml_legend=1 00:13:38.460 --rc geninfo_all_blocks=1 00:13:38.460 --rc geninfo_unexecuted_blocks=1 00:13:38.460 00:13:38.460 ' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.460 --rc genhtml_branch_coverage=1 00:13:38.460 --rc genhtml_function_coverage=1 00:13:38.460 --rc genhtml_legend=1 00:13:38.460 --rc geninfo_all_blocks=1 00:13:38.460 --rc geninfo_unexecuted_blocks=1 00:13:38.460 00:13:38.460 ' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.460 --rc genhtml_branch_coverage=1 00:13:38.460 --rc genhtml_function_coverage=1 00:13:38.460 --rc genhtml_legend=1 00:13:38.460 --rc geninfo_all_blocks=1 00:13:38.460 --rc geninfo_unexecuted_blocks=1 00:13:38.460 00:13:38.460 ' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.460 --rc genhtml_branch_coverage=1 00:13:38.460 --rc genhtml_function_coverage=1 00:13:38.460 --rc genhtml_legend=1 00:13:38.460 --rc geninfo_all_blocks=1 00:13:38.460 --rc geninfo_unexecuted_blocks=1 00:13:38.460 00:13:38.460 ' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.460 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:38.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.461 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:46.599 10:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:46.599 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.599 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:46.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:46.600 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:46.600 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.600 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:46.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:46.600 00:13:46.600 --- 10.0.0.2 ping statistics --- 00:13:46.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.600 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:13:46.600 00:13:46.600 --- 10.0.0.1 ping statistics --- 00:13:46.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.600 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=319677 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 319677 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 319677 ']' 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:46.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.600 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.600 [2024-11-15 10:54:05.376531] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:13:46.600 [2024-11-15 10:54:05.376606] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.600 [2024-11-15 10:54:05.478304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.600 [2024-11-15 10:54:05.529624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.600 [2024-11-15 10:54:05.529670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.600 [2024-11-15 10:54:05.529679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.600 [2024-11-15 10:54:05.529686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.600 [2024-11-15 10:54:05.529693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.600 [2024-11-15 10:54:05.531554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.600 [2024-11-15 10:54:05.531630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.600 [2024-11-15 10:54:05.531663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.862 [2024-11-15 10:54:06.244729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
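For anyone reproducing this bring-up by hand: the rpc_cmd wrappers traced here and just below reduce to plain rpc.py calls against the target's RPC socket. A minimal sketch using only the parameters visible in this trace; the flag glosses assume stock rpc.py semantics, and the socket is the default /var/tmp/spdk.sock named in the waitforlisten message above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport, with the same options the harness passes (-o -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192

# subsystem: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# listener on the namespaced target address configured earlier in this run
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MiB null bdev with 512-byte blocks to back the stress namespace
$rpc bdev_null_create NULL1 1000 512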
00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.862 [2024-11-15 10:54:06.270381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.862 NULL1 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=319773 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.862 10:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.862 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.435 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.435 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:47.435 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.435 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.435 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.696 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.696 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:47.696 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.696 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.696 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.957 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.957 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:47.957 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.957 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.957 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.218 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:48.218 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.218 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.218 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.789 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.789 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:48.789 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.789 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.789 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.050 10:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:49.050 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.050 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.050 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.310 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.310 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:49.310 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.310 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.310 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.571 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:49.571 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.571 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.571 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.830 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.830 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:49.830 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.830 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.830 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.401 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.401 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:50.401 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.401 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.401 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.662 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.662 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:50.662 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.662 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.662 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.923 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.923 10:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:50.923 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.923 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.923 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.183 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:51.183 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.183 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.183 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.444 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.444 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:51.444 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.444 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.444 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.014 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.014 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:52.014 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.014 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.014 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.275 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.275 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:52.275 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.275 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.275 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.535 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.535 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:52.535 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.535 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.535 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.796 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.796 10:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:52.796 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.796 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.796 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.055 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.055 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:53.055 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.055 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.055 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.626 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.626 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:53.626 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.626 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.626 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.887 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.887 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:53.887 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.887 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.887 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.148 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.148 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:54.148 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.148 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.148 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.409 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.409 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:54.409 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.409 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.409 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.982 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.982 10:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:54.982 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.982 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.982 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.242 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.242 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:55.242 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.242 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.242 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.501 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.501 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:55.501 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.501 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.501 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.761 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:55.761 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.761 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.761 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.021 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.021 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:56.021 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.021 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.021 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.592 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.592 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:56.592 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.592 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.592 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.853 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.853 10:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:56.853 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.853 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.853 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.113 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 319773 00:13:57.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (319773) - No such process 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 319773 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:57.113 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.114 rmmod nvme_tcp 00:13:57.114 rmmod nvme_fabrics 00:13:57.114 rmmod nvme_keyring 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 319677 ']' 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 319677 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 319677 ']' 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 319677 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 319677 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
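The long run of near-identical checks above is the heart of this test: connect_stress runs in the background for ten seconds (-t 10), and connect_stress.sh@34 loops on kill -0 $PERF_PID, which delivers no signal and only tests that the PID still exists, re-applying the generated rpc.txt churn (@35) on every pass; once the client exits, kill -0 reports 'No such process' and wait reaps its exit status, which decides the test result. A distilled sketch of the pattern, with the binary path shortened; xtrace hides redirections, so feeding rpc.txt to rpc_cmd on stdin is an assumption here:

    ./connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID"; do    # signal 0: liveness probe only, nothing delivered
        rpc_cmd < "$rpcs"            # assumed: replay the RPC churn generated above
    done
    wait "$PERF_PID"                 # reap the client's exit status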
00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 319677' 00:13:57.114 killing process with pid 319677 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 319677 00:13:57.114 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 319677 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.374 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.289 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.289 00:13:59.289 real 0m21.264s 00:13:59.289 user 0m42.100s 00:13:59.289 sys 0m9.373s 00:13:59.289 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.550 ************************************ 00:13:59.550 END TEST nvmf_connect_stress 00:13:59.550 ************************************ 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.550 ************************************ 00:13:59.550 START TEST nvmf_fused_ordering 00:13:59.550 ************************************ 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.550 * Looking for test storage... 
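Before the fused_ordering output starts in earnest, note how the connect_stress teardown just above undoes its firewall change: every ACCEPT rule the harness inserts (the ipts call at nvmf/common.sh@287/@790 earlier in this log) carries an iptables comment beginning with SPDK_NVMF:, so nvmftestfini can round-trip the whole ruleset through grep -v and delete exactly its own rules, leaving everything else untouched. Both halves of the pattern appear verbatim in the trace:

    # insertion: the rule is tagged with a comment recording how it was added
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # removal: filter the tagged rules out of the saved ruleset, restore the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore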
00:13:59.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.550 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:59.550 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:59.550 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:59.550 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:59.550 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.550 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.812 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.813 --rc genhtml_branch_coverage=1 00:13:59.813 --rc genhtml_function_coverage=1 00:13:59.813 --rc genhtml_legend=1 00:13:59.813 --rc geninfo_all_blocks=1 00:13:59.813 --rc geninfo_unexecuted_blocks=1 00:13:59.813 00:13:59.813 ' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.813 --rc genhtml_branch_coverage=1 00:13:59.813 --rc genhtml_function_coverage=1 00:13:59.813 --rc genhtml_legend=1 00:13:59.813 --rc geninfo_all_blocks=1 00:13:59.813 --rc geninfo_unexecuted_blocks=1 00:13:59.813 00:13:59.813 ' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.813 --rc genhtml_branch_coverage=1 00:13:59.813 --rc genhtml_function_coverage=1 00:13:59.813 --rc genhtml_legend=1 00:13:59.813 --rc geninfo_all_blocks=1 00:13:59.813 --rc geninfo_unexecuted_blocks=1 00:13:59.813 00:13:59.813 ' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.813 --rc genhtml_branch_coverage=1 00:13:59.813 --rc genhtml_function_coverage=1 00:13:59.813 --rc genhtml_legend=1 00:13:59.813 --rc geninfo_all_blocks=1 00:13:59.813 --rc geninfo_unexecuted_blocks=1 00:13:59.813 00:13:59.813 ' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:59.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.813 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.957 10:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:14:07.957 00:14:07.957 --- 10.0.0.2 ping statistics --- 00:14:07.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.957 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:14:07.957 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:14:07.957 00:14:07.957 --- 10.0.0.1 ping statistics --- 00:14:07.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.957 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=326578 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 326578 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 326578 ']' 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:07.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.958 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.958 [2024-11-15 10:54:26.659390] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:14:07.958 [2024-11-15 10:54:26.659457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.958 [2024-11-15 10:54:26.760929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.958 [2024-11-15 10:54:26.812101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.958 [2024-11-15 10:54:26.812152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.958 [2024-11-15 10:54:26.812161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.958 [2024-11-15 10:54:26.812168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.958 [2024-11-15 10:54:26.812174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.958 [2024-11-15 10:54:26.812959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.958 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.958 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:07.958 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.958 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:07.958 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 [2024-11-15 10:54:27.525410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 [2024-11-15 10:54:27.549704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 NULL1 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.219 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:08.219 [2024-11-15 10:54:27.620167] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
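For orientation, the nvmf_tcp_init sequence traced earlier (common.sh@250-291) is what built the isolated data path this test runs over: cvl_0_0 becomes the target side inside a fresh network namespace, while cvl_0_1 stays in the root namespace as the initiator side. Condensed from the trace, with names and addresses exactly as logged:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target, 0.503 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator, 0.267 ms

The comment tag on the iptables rule is what lets teardown strip exactly this rule later via iptables-save | grep -v SPDK_NVMF.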
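The rpc_cmd traces just above reduce to a short bring-up script. Reconstructed as standalone commands (a sketch: the nvmf_tgt path is shortened, rpc.py is assumed to talk to the default /var/tmp/spdk.sock, and all flags are exactly as traced):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # reactor on core 1

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10            # allow any host, at most 10 namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB / 512 B blocks -> "size: 1GB"
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then attaches with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and prints one fused_ordering(i) line per iteration of its ordering check; the 0-1023 runs below are that output.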
00:14:08.219 [2024-11-15 10:54:27.620232] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326633 ] 00:14:08.792 Attached to nqn.2016-06.io.spdk:cnode1 00:14:08.792 Namespace ID: 1 size: 1GB 00:14:08.792 fused_ordering(0) 00:14:08.792 fused_ordering(1) 00:14:08.792 fused_ordering(2) 00:14:08.792 fused_ordering(3) 00:14:08.792 fused_ordering(4) 00:14:08.792 fused_ordering(5) 00:14:08.792 fused_ordering(6) 00:14:08.792 fused_ordering(7) 00:14:08.792 fused_ordering(8) 00:14:08.792 fused_ordering(9) 00:14:08.792 fused_ordering(10) 00:14:08.792 fused_ordering(11) 00:14:08.792 fused_ordering(12) 00:14:08.792 fused_ordering(13) 00:14:08.792 fused_ordering(14) 00:14:08.792 fused_ordering(15) 00:14:08.792 fused_ordering(16) 00:14:08.792 fused_ordering(17) 00:14:08.792 fused_ordering(18) 00:14:08.792 fused_ordering(19) 00:14:08.792 fused_ordering(20) 00:14:08.792 fused_ordering(21) 00:14:08.792 fused_ordering(22) 00:14:08.792 fused_ordering(23) 00:14:08.792 fused_ordering(24) 00:14:08.792 fused_ordering(25) 00:14:08.792 fused_ordering(26) 00:14:08.792 fused_ordering(27) 00:14:08.792 fused_ordering(28) 00:14:08.792 fused_ordering(29) 00:14:08.792 fused_ordering(30) 00:14:08.792 fused_ordering(31) 00:14:08.792 fused_ordering(32) 00:14:08.792 fused_ordering(33) 00:14:08.792 fused_ordering(34) 00:14:08.792 fused_ordering(35) 00:14:08.792 fused_ordering(36) 00:14:08.792 fused_ordering(37) 00:14:08.792 fused_ordering(38) 00:14:08.792 fused_ordering(39) 00:14:08.792 fused_ordering(40) 00:14:08.792 fused_ordering(41) 00:14:08.792 fused_ordering(42) 00:14:08.792 fused_ordering(43) 00:14:08.792 fused_ordering(44) 00:14:08.792 fused_ordering(45) 00:14:08.792 fused_ordering(46) 00:14:08.792 fused_ordering(47) 00:14:08.792 fused_ordering(48) 00:14:08.792 fused_ordering(49) 00:14:08.792 fused_ordering(50) 00:14:08.792 fused_ordering(51) 00:14:08.793 fused_ordering(52) 00:14:08.793 fused_ordering(53) 00:14:08.793 fused_ordering(54) 00:14:08.793 fused_ordering(55) 00:14:08.793 fused_ordering(56) 00:14:08.793 fused_ordering(57) 00:14:08.793 fused_ordering(58) 00:14:08.793 fused_ordering(59) 00:14:08.793 fused_ordering(60) 00:14:08.793 fused_ordering(61) 00:14:08.793 fused_ordering(62) 00:14:08.793 fused_ordering(63) 00:14:08.793 fused_ordering(64) 00:14:08.793 fused_ordering(65) 00:14:08.793 fused_ordering(66) 00:14:08.793 fused_ordering(67) 00:14:08.793 fused_ordering(68) 00:14:08.793 fused_ordering(69) 00:14:08.793 fused_ordering(70) 00:14:08.793 fused_ordering(71) 00:14:08.793 fused_ordering(72) 00:14:08.793 fused_ordering(73) 00:14:08.793 fused_ordering(74) 00:14:08.793 fused_ordering(75) 00:14:08.793 fused_ordering(76) 00:14:08.793 fused_ordering(77) 00:14:08.793 fused_ordering(78) 00:14:08.793 fused_ordering(79) 00:14:08.793 fused_ordering(80) 00:14:08.793 fused_ordering(81) 00:14:08.793 fused_ordering(82) 00:14:08.793 fused_ordering(83) 00:14:08.793 fused_ordering(84) 00:14:08.793 fused_ordering(85) 00:14:08.793 fused_ordering(86) 00:14:08.793 fused_ordering(87) 00:14:08.793 fused_ordering(88) 00:14:08.793 fused_ordering(89) 00:14:08.793 fused_ordering(90) 00:14:08.793 fused_ordering(91) 00:14:08.793 fused_ordering(92) 00:14:08.793 fused_ordering(93) 00:14:08.793 fused_ordering(94) 00:14:08.793 fused_ordering(95) 00:14:08.793 fused_ordering(96) 00:14:08.793 fused_ordering(97) 00:14:08.793 fused_ordering(98) 
00:14:08.793 fused_ordering(99) 00:14:08.793 fused_ordering(100) 00:14:08.793 fused_ordering(101) 00:14:08.793 fused_ordering(102) 00:14:08.793 fused_ordering(103) 00:14:08.793 fused_ordering(104) 00:14:08.793 fused_ordering(105) 00:14:08.793 fused_ordering(106) 00:14:08.793 fused_ordering(107) 00:14:08.793 fused_ordering(108) 00:14:08.793 fused_ordering(109) 00:14:08.793 fused_ordering(110) 00:14:08.793 fused_ordering(111) 00:14:08.793 fused_ordering(112) 00:14:08.793 fused_ordering(113) 00:14:08.793 fused_ordering(114) 00:14:08.793 fused_ordering(115) 00:14:08.793 fused_ordering(116) 00:14:08.793 fused_ordering(117) 00:14:08.793 fused_ordering(118) 00:14:08.793 fused_ordering(119) 00:14:08.793 fused_ordering(120) 00:14:08.793 fused_ordering(121) 00:14:08.793 fused_ordering(122) 00:14:08.793 fused_ordering(123) 00:14:08.793 fused_ordering(124) 00:14:08.793 fused_ordering(125) 00:14:08.793 fused_ordering(126) 00:14:08.793 fused_ordering(127) 00:14:08.793 fused_ordering(128) 00:14:08.793 fused_ordering(129) 00:14:08.793 fused_ordering(130) 00:14:08.793 fused_ordering(131) 00:14:08.793 fused_ordering(132) 00:14:08.793 fused_ordering(133) 00:14:08.793 fused_ordering(134) 00:14:08.793 fused_ordering(135) 00:14:08.793 fused_ordering(136) 00:14:08.793 fused_ordering(137) 00:14:08.793 fused_ordering(138) 00:14:08.793 fused_ordering(139) 00:14:08.793 fused_ordering(140) 00:14:08.793 fused_ordering(141) 00:14:08.793 fused_ordering(142) 00:14:08.793 fused_ordering(143) 00:14:08.793 fused_ordering(144) 00:14:08.793 fused_ordering(145) 00:14:08.793 fused_ordering(146) 00:14:08.793 fused_ordering(147) 00:14:08.793 fused_ordering(148) 00:14:08.793 fused_ordering(149) 00:14:08.793 fused_ordering(150) 00:14:08.793 fused_ordering(151) 00:14:08.793 fused_ordering(152) 00:14:08.793 fused_ordering(153) 00:14:08.793 fused_ordering(154) 00:14:08.793 fused_ordering(155) 00:14:08.793 fused_ordering(156) 00:14:08.793 fused_ordering(157) 00:14:08.793 fused_ordering(158) 00:14:08.793 fused_ordering(159) 00:14:08.793 fused_ordering(160) 00:14:08.793 fused_ordering(161) 00:14:08.793 fused_ordering(162) 00:14:08.793 fused_ordering(163) 00:14:08.793 fused_ordering(164) 00:14:08.793 fused_ordering(165) 00:14:08.793 fused_ordering(166) 00:14:08.793 fused_ordering(167) 00:14:08.793 fused_ordering(168) 00:14:08.793 fused_ordering(169) 00:14:08.793 fused_ordering(170) 00:14:08.793 fused_ordering(171) 00:14:08.793 fused_ordering(172) 00:14:08.793 fused_ordering(173) 00:14:08.793 fused_ordering(174) 00:14:08.793 fused_ordering(175) 00:14:08.793 fused_ordering(176) 00:14:08.793 fused_ordering(177) 00:14:08.793 fused_ordering(178) 00:14:08.793 fused_ordering(179) 00:14:08.793 fused_ordering(180) 00:14:08.793 fused_ordering(181) 00:14:08.793 fused_ordering(182) 00:14:08.793 fused_ordering(183) 00:14:08.793 fused_ordering(184) 00:14:08.793 fused_ordering(185) 00:14:08.793 fused_ordering(186) 00:14:08.793 fused_ordering(187) 00:14:08.793 fused_ordering(188) 00:14:08.793 fused_ordering(189) 00:14:08.793 fused_ordering(190) 00:14:08.793 fused_ordering(191) 00:14:08.793 fused_ordering(192) 00:14:08.793 fused_ordering(193) 00:14:08.793 fused_ordering(194) 00:14:08.793 fused_ordering(195) 00:14:08.793 fused_ordering(196) 00:14:08.793 fused_ordering(197) 00:14:08.793 fused_ordering(198) 00:14:08.793 fused_ordering(199) 00:14:08.793 fused_ordering(200) 00:14:08.793 fused_ordering(201) 00:14:08.793 fused_ordering(202) 00:14:08.793 fused_ordering(203) 00:14:08.793 fused_ordering(204) 00:14:08.793 fused_ordering(205) 00:14:09.055 
fused_ordering(206) 00:14:09.055 fused_ordering(207) 00:14:09.055 fused_ordering(208) 00:14:09.055 fused_ordering(209) 00:14:09.055 fused_ordering(210) 00:14:09.055 fused_ordering(211) 00:14:09.055 fused_ordering(212) 00:14:09.055 fused_ordering(213) 00:14:09.055 fused_ordering(214) 00:14:09.055 fused_ordering(215) 00:14:09.055 fused_ordering(216) 00:14:09.055 fused_ordering(217) 00:14:09.055 fused_ordering(218) 00:14:09.055 fused_ordering(219) 00:14:09.055 fused_ordering(220) 00:14:09.055 fused_ordering(221) 00:14:09.055 fused_ordering(222) 00:14:09.055 fused_ordering(223) 00:14:09.055 fused_ordering(224) 00:14:09.055 fused_ordering(225) 00:14:09.055 fused_ordering(226) 00:14:09.055 fused_ordering(227) 00:14:09.055 fused_ordering(228) 00:14:09.055 fused_ordering(229) 00:14:09.055 fused_ordering(230) 00:14:09.055 fused_ordering(231) 00:14:09.055 fused_ordering(232) 00:14:09.055 fused_ordering(233) 00:14:09.055 fused_ordering(234) 00:14:09.055 fused_ordering(235) 00:14:09.055 fused_ordering(236) 00:14:09.055 fused_ordering(237) 00:14:09.055 fused_ordering(238) 00:14:09.055 fused_ordering(239) 00:14:09.055 fused_ordering(240) 00:14:09.055 fused_ordering(241) 00:14:09.055 fused_ordering(242) 00:14:09.055 fused_ordering(243) 00:14:09.055 fused_ordering(244) 00:14:09.055 fused_ordering(245) 00:14:09.055 fused_ordering(246) 00:14:09.055 fused_ordering(247) 00:14:09.055 fused_ordering(248) 00:14:09.055 fused_ordering(249) 00:14:09.055 fused_ordering(250) 00:14:09.055 fused_ordering(251) 00:14:09.055 fused_ordering(252) 00:14:09.055 fused_ordering(253) 00:14:09.055 fused_ordering(254) 00:14:09.055 fused_ordering(255) 00:14:09.055 fused_ordering(256) 00:14:09.055 fused_ordering(257) 00:14:09.055 fused_ordering(258) 00:14:09.055 fused_ordering(259) 00:14:09.055 fused_ordering(260) 00:14:09.055 fused_ordering(261) 00:14:09.055 fused_ordering(262) 00:14:09.055 fused_ordering(263) 00:14:09.055 fused_ordering(264) 00:14:09.055 fused_ordering(265) 00:14:09.055 fused_ordering(266) 00:14:09.055 fused_ordering(267) 00:14:09.055 fused_ordering(268) 00:14:09.055 fused_ordering(269) 00:14:09.055 fused_ordering(270) 00:14:09.055 fused_ordering(271) 00:14:09.055 fused_ordering(272) 00:14:09.055 fused_ordering(273) 00:14:09.055 fused_ordering(274) 00:14:09.055 fused_ordering(275) 00:14:09.055 fused_ordering(276) 00:14:09.055 fused_ordering(277) 00:14:09.055 fused_ordering(278) 00:14:09.055 fused_ordering(279) 00:14:09.055 fused_ordering(280) 00:14:09.055 fused_ordering(281) 00:14:09.055 fused_ordering(282) 00:14:09.055 fused_ordering(283) 00:14:09.055 fused_ordering(284) 00:14:09.055 fused_ordering(285) 00:14:09.055 fused_ordering(286) 00:14:09.055 fused_ordering(287) 00:14:09.055 fused_ordering(288) 00:14:09.055 fused_ordering(289) 00:14:09.055 fused_ordering(290) 00:14:09.055 fused_ordering(291) 00:14:09.055 fused_ordering(292) 00:14:09.055 fused_ordering(293) 00:14:09.055 fused_ordering(294) 00:14:09.055 fused_ordering(295) 00:14:09.055 fused_ordering(296) 00:14:09.055 fused_ordering(297) 00:14:09.055 fused_ordering(298) 00:14:09.055 fused_ordering(299) 00:14:09.055 fused_ordering(300) 00:14:09.055 fused_ordering(301) 00:14:09.055 fused_ordering(302) 00:14:09.055 fused_ordering(303) 00:14:09.055 fused_ordering(304) 00:14:09.055 fused_ordering(305) 00:14:09.055 fused_ordering(306) 00:14:09.055 fused_ordering(307) 00:14:09.055 fused_ordering(308) 00:14:09.055 fused_ordering(309) 00:14:09.055 fused_ordering(310) 00:14:09.055 fused_ordering(311) 00:14:09.055 fused_ordering(312) 00:14:09.055 fused_ordering(313) 
00:14:09.055 fused_ordering(314) 00:14:09.055 fused_ordering(315) 00:14:09.055 fused_ordering(316) 00:14:09.055 fused_ordering(317) 00:14:09.055 fused_ordering(318) 00:14:09.055 fused_ordering(319) 00:14:09.055 fused_ordering(320) 00:14:09.055 fused_ordering(321) 00:14:09.055 fused_ordering(322) 00:14:09.055 fused_ordering(323) 00:14:09.055 fused_ordering(324) 00:14:09.055 fused_ordering(325) 00:14:09.055 fused_ordering(326) 00:14:09.055 fused_ordering(327) 00:14:09.055 fused_ordering(328) 00:14:09.055 fused_ordering(329) 00:14:09.055 fused_ordering(330) 00:14:09.055 fused_ordering(331) 00:14:09.055 fused_ordering(332) 00:14:09.055 fused_ordering(333) 00:14:09.055 fused_ordering(334) 00:14:09.055 fused_ordering(335) 00:14:09.055 fused_ordering(336) 00:14:09.055 fused_ordering(337) 00:14:09.055 fused_ordering(338) 00:14:09.055 fused_ordering(339) 00:14:09.055 fused_ordering(340) 00:14:09.055 fused_ordering(341) 00:14:09.055 fused_ordering(342) 00:14:09.055 fused_ordering(343) 00:14:09.055 fused_ordering(344) 00:14:09.055 fused_ordering(345) 00:14:09.055 fused_ordering(346) 00:14:09.055 fused_ordering(347) 00:14:09.055 fused_ordering(348) 00:14:09.055 fused_ordering(349) 00:14:09.055 fused_ordering(350) 00:14:09.055 fused_ordering(351) 00:14:09.055 fused_ordering(352) 00:14:09.055 fused_ordering(353) 00:14:09.055 fused_ordering(354) 00:14:09.055 fused_ordering(355) 00:14:09.055 fused_ordering(356) 00:14:09.055 fused_ordering(357) 00:14:09.055 fused_ordering(358) 00:14:09.055 fused_ordering(359) 00:14:09.055 fused_ordering(360) 00:14:09.055 fused_ordering(361) 00:14:09.055 fused_ordering(362) 00:14:09.055 fused_ordering(363) 00:14:09.055 fused_ordering(364) 00:14:09.055 fused_ordering(365) 00:14:09.055 fused_ordering(366) 00:14:09.055 fused_ordering(367) 00:14:09.055 fused_ordering(368) 00:14:09.055 fused_ordering(369) 00:14:09.055 fused_ordering(370) 00:14:09.055 fused_ordering(371) 00:14:09.055 fused_ordering(372) 00:14:09.055 fused_ordering(373) 00:14:09.055 fused_ordering(374) 00:14:09.055 fused_ordering(375) 00:14:09.055 fused_ordering(376) 00:14:09.055 fused_ordering(377) 00:14:09.055 fused_ordering(378) 00:14:09.055 fused_ordering(379) 00:14:09.055 fused_ordering(380) 00:14:09.055 fused_ordering(381) 00:14:09.055 fused_ordering(382) 00:14:09.055 fused_ordering(383) 00:14:09.055 fused_ordering(384) 00:14:09.055 fused_ordering(385) 00:14:09.055 fused_ordering(386) 00:14:09.055 fused_ordering(387) 00:14:09.055 fused_ordering(388) 00:14:09.055 fused_ordering(389) 00:14:09.055 fused_ordering(390) 00:14:09.055 fused_ordering(391) 00:14:09.055 fused_ordering(392) 00:14:09.055 fused_ordering(393) 00:14:09.055 fused_ordering(394) 00:14:09.055 fused_ordering(395) 00:14:09.055 fused_ordering(396) 00:14:09.055 fused_ordering(397) 00:14:09.055 fused_ordering(398) 00:14:09.055 fused_ordering(399) 00:14:09.055 fused_ordering(400) 00:14:09.055 fused_ordering(401) 00:14:09.055 fused_ordering(402) 00:14:09.055 fused_ordering(403) 00:14:09.055 fused_ordering(404) 00:14:09.055 fused_ordering(405) 00:14:09.055 fused_ordering(406) 00:14:09.055 fused_ordering(407) 00:14:09.055 fused_ordering(408) 00:14:09.055 fused_ordering(409) 00:14:09.055 fused_ordering(410) 00:14:09.628 fused_ordering(411) 00:14:09.628 fused_ordering(412) 00:14:09.628 fused_ordering(413) 00:14:09.628 fused_ordering(414) 00:14:09.628 fused_ordering(415) 00:14:09.628 fused_ordering(416) 00:14:09.628 fused_ordering(417) 00:14:09.628 fused_ordering(418) 00:14:09.628 fused_ordering(419) 00:14:09.628 fused_ordering(420) 00:14:09.628 
fused_ordering(421) 00:14:09.628 fused_ordering(422) 00:14:09.628 fused_ordering(423) 00:14:09.628 fused_ordering(424) 00:14:09.628 fused_ordering(425) 00:14:09.628 fused_ordering(426) 00:14:09.628 fused_ordering(427) 00:14:09.628 fused_ordering(428) 00:14:09.628 fused_ordering(429) 00:14:09.628 fused_ordering(430) 00:14:09.628 fused_ordering(431) 00:14:09.628 fused_ordering(432) 00:14:09.628 fused_ordering(433) 00:14:09.628 fused_ordering(434) 00:14:09.628 fused_ordering(435) 00:14:09.628 fused_ordering(436) 00:14:09.628 fused_ordering(437) 00:14:09.628 fused_ordering(438) 00:14:09.628 fused_ordering(439) 00:14:09.628 fused_ordering(440) 00:14:09.628 fused_ordering(441) 00:14:09.628 fused_ordering(442) 00:14:09.628 fused_ordering(443) 00:14:09.628 fused_ordering(444) 00:14:09.628 fused_ordering(445) 00:14:09.628 fused_ordering(446) 00:14:09.628 fused_ordering(447) 00:14:09.628 fused_ordering(448) 00:14:09.628 fused_ordering(449) 00:14:09.628 fused_ordering(450) 00:14:09.628 fused_ordering(451) 00:14:09.628 fused_ordering(452) 00:14:09.628 fused_ordering(453) 00:14:09.628 fused_ordering(454) 00:14:09.628 fused_ordering(455) 00:14:09.628 fused_ordering(456) 00:14:09.628 fused_ordering(457) 00:14:09.628 fused_ordering(458) 00:14:09.628 fused_ordering(459) 00:14:09.628 fused_ordering(460) 00:14:09.628 fused_ordering(461) 00:14:09.628 fused_ordering(462) 00:14:09.628 fused_ordering(463) 00:14:09.628 fused_ordering(464) 00:14:09.628 fused_ordering(465) 00:14:09.628 fused_ordering(466) 00:14:09.628 fused_ordering(467) 00:14:09.628 fused_ordering(468) 00:14:09.628 fused_ordering(469) 00:14:09.628 fused_ordering(470) 00:14:09.628 fused_ordering(471) 00:14:09.628 fused_ordering(472) 00:14:09.628 fused_ordering(473) 00:14:09.628 fused_ordering(474) 00:14:09.628 fused_ordering(475) 00:14:09.628 fused_ordering(476) 00:14:09.628 fused_ordering(477) 00:14:09.628 fused_ordering(478) 00:14:09.628 fused_ordering(479) 00:14:09.628 fused_ordering(480) 00:14:09.628 fused_ordering(481) 00:14:09.628 fused_ordering(482) 00:14:09.628 fused_ordering(483) 00:14:09.628 fused_ordering(484) 00:14:09.628 fused_ordering(485) 00:14:09.628 fused_ordering(486) 00:14:09.628 fused_ordering(487) 00:14:09.628 fused_ordering(488) 00:14:09.628 fused_ordering(489) 00:14:09.628 fused_ordering(490) 00:14:09.628 fused_ordering(491) 00:14:09.628 fused_ordering(492) 00:14:09.628 fused_ordering(493) 00:14:09.628 fused_ordering(494) 00:14:09.628 fused_ordering(495) 00:14:09.628 fused_ordering(496) 00:14:09.628 fused_ordering(497) 00:14:09.628 fused_ordering(498) 00:14:09.628 fused_ordering(499) 00:14:09.628 fused_ordering(500) 00:14:09.628 fused_ordering(501) 00:14:09.628 fused_ordering(502) 00:14:09.628 fused_ordering(503) 00:14:09.628 fused_ordering(504) 00:14:09.628 fused_ordering(505) 00:14:09.628 fused_ordering(506) 00:14:09.628 fused_ordering(507) 00:14:09.628 fused_ordering(508) 00:14:09.628 fused_ordering(509) 00:14:09.628 fused_ordering(510) 00:14:09.628 fused_ordering(511) 00:14:09.628 fused_ordering(512) 00:14:09.628 fused_ordering(513) 00:14:09.628 fused_ordering(514) 00:14:09.628 fused_ordering(515) 00:14:09.628 fused_ordering(516) 00:14:09.628 fused_ordering(517) 00:14:09.628 fused_ordering(518) 00:14:09.628 fused_ordering(519) 00:14:09.628 fused_ordering(520) 00:14:09.628 fused_ordering(521) 00:14:09.628 fused_ordering(522) 00:14:09.628 fused_ordering(523) 00:14:09.628 fused_ordering(524) 00:14:09.628 fused_ordering(525) 00:14:09.628 fused_ordering(526) 00:14:09.628 fused_ordering(527) 00:14:09.628 fused_ordering(528) 
00:14:09.628 fused_ordering(529) 00:14:09.628 fused_ordering(530) 00:14:09.628 fused_ordering(531) 00:14:09.628 fused_ordering(532) 00:14:09.628 fused_ordering(533) 00:14:09.628 fused_ordering(534) 00:14:09.628 fused_ordering(535) 00:14:09.628 fused_ordering(536) 00:14:09.628 fused_ordering(537) 00:14:09.628 fused_ordering(538) 00:14:09.628 fused_ordering(539) 00:14:09.628 fused_ordering(540) 00:14:09.628 fused_ordering(541) 00:14:09.628 fused_ordering(542) 00:14:09.628 fused_ordering(543) 00:14:09.628 fused_ordering(544) 00:14:09.628 fused_ordering(545) 00:14:09.628 fused_ordering(546) 00:14:09.628 fused_ordering(547) 00:14:09.628 fused_ordering(548) 00:14:09.628 fused_ordering(549) 00:14:09.628 fused_ordering(550) 00:14:09.628 fused_ordering(551) 00:14:09.628 fused_ordering(552) 00:14:09.628 fused_ordering(553) 00:14:09.628 fused_ordering(554) 00:14:09.628 fused_ordering(555) 00:14:09.628 fused_ordering(556) 00:14:09.628 fused_ordering(557) 00:14:09.628 fused_ordering(558) 00:14:09.628 fused_ordering(559) 00:14:09.628 fused_ordering(560) 00:14:09.628 fused_ordering(561) 00:14:09.628 fused_ordering(562) 00:14:09.628 fused_ordering(563) 00:14:09.628 fused_ordering(564) 00:14:09.628 fused_ordering(565) 00:14:09.628 fused_ordering(566) 00:14:09.628 fused_ordering(567) 00:14:09.628 fused_ordering(568) 00:14:09.628 fused_ordering(569) 00:14:09.628 fused_ordering(570) 00:14:09.628 fused_ordering(571) 00:14:09.628 fused_ordering(572) 00:14:09.628 fused_ordering(573) 00:14:09.628 fused_ordering(574) 00:14:09.628 fused_ordering(575) 00:14:09.628 fused_ordering(576) 00:14:09.628 fused_ordering(577) 00:14:09.628 fused_ordering(578) 00:14:09.628 fused_ordering(579) 00:14:09.628 fused_ordering(580) 00:14:09.628 fused_ordering(581) 00:14:09.628 fused_ordering(582) 00:14:09.628 fused_ordering(583) 00:14:09.628 fused_ordering(584) 00:14:09.628 fused_ordering(585) 00:14:09.628 fused_ordering(586) 00:14:09.628 fused_ordering(587) 00:14:09.628 fused_ordering(588) 00:14:09.629 fused_ordering(589) 00:14:09.629 fused_ordering(590) 00:14:09.629 fused_ordering(591) 00:14:09.629 fused_ordering(592) 00:14:09.629 fused_ordering(593) 00:14:09.629 fused_ordering(594) 00:14:09.629 fused_ordering(595) 00:14:09.629 fused_ordering(596) 00:14:09.629 fused_ordering(597) 00:14:09.629 fused_ordering(598) 00:14:09.629 fused_ordering(599) 00:14:09.629 fused_ordering(600) 00:14:09.629 fused_ordering(601) 00:14:09.629 fused_ordering(602) 00:14:09.629 fused_ordering(603) 00:14:09.629 fused_ordering(604) 00:14:09.629 fused_ordering(605) 00:14:09.629 fused_ordering(606) 00:14:09.629 fused_ordering(607) 00:14:09.629 fused_ordering(608) 00:14:09.629 fused_ordering(609) 00:14:09.629 fused_ordering(610) 00:14:09.629 fused_ordering(611) 00:14:09.629 fused_ordering(612) 00:14:09.629 fused_ordering(613) 00:14:09.629 fused_ordering(614) 00:14:09.629 fused_ordering(615) 00:14:10.200 fused_ordering(616) 00:14:10.200 fused_ordering(617) 00:14:10.200 fused_ordering(618) 00:14:10.200 fused_ordering(619) 00:14:10.200 fused_ordering(620) 00:14:10.200 fused_ordering(621) 00:14:10.200 fused_ordering(622) 00:14:10.200 fused_ordering(623) 00:14:10.200 fused_ordering(624) 00:14:10.200 fused_ordering(625) 00:14:10.200 fused_ordering(626) 00:14:10.200 fused_ordering(627) 00:14:10.200 fused_ordering(628) 00:14:10.200 fused_ordering(629) 00:14:10.200 fused_ordering(630) 00:14:10.200 fused_ordering(631) 00:14:10.200 fused_ordering(632) 00:14:10.200 fused_ordering(633) 00:14:10.200 fused_ordering(634) 00:14:10.200 fused_ordering(635) 00:14:10.200 
fused_ordering(636) 00:14:10.200 fused_ordering(637) 00:14:10.200 fused_ordering(638) 00:14:10.200 fused_ordering(639) 00:14:10.200 fused_ordering(640) 00:14:10.200 fused_ordering(641) 00:14:10.200 fused_ordering(642) 00:14:10.200 fused_ordering(643) 00:14:10.200 fused_ordering(644) 00:14:10.200 fused_ordering(645) 00:14:10.200 fused_ordering(646) 00:14:10.200 fused_ordering(647) 00:14:10.200 fused_ordering(648) 00:14:10.200 fused_ordering(649) 00:14:10.200 fused_ordering(650) 00:14:10.200 fused_ordering(651) 00:14:10.200 fused_ordering(652) 00:14:10.200 fused_ordering(653) 00:14:10.200 fused_ordering(654) 00:14:10.200 fused_ordering(655) 00:14:10.200 fused_ordering(656) 00:14:10.200 fused_ordering(657) 00:14:10.200 fused_ordering(658) 00:14:10.200 fused_ordering(659) 00:14:10.200 fused_ordering(660) 00:14:10.200 fused_ordering(661) 00:14:10.200 fused_ordering(662) 00:14:10.200 fused_ordering(663) 00:14:10.200 fused_ordering(664) 00:14:10.200 fused_ordering(665) 00:14:10.200 fused_ordering(666) 00:14:10.200 fused_ordering(667) 00:14:10.200 fused_ordering(668) 00:14:10.200 fused_ordering(669) 00:14:10.200 fused_ordering(670) 00:14:10.200 fused_ordering(671) 00:14:10.200 fused_ordering(672) 00:14:10.200 fused_ordering(673) 00:14:10.200 fused_ordering(674) 00:14:10.200 fused_ordering(675) 00:14:10.200 fused_ordering(676) 00:14:10.200 fused_ordering(677) 00:14:10.200 fused_ordering(678) 00:14:10.200 fused_ordering(679) 00:14:10.200 fused_ordering(680) 00:14:10.200 fused_ordering(681) 00:14:10.200 fused_ordering(682) 00:14:10.200 fused_ordering(683) 00:14:10.200 fused_ordering(684) 00:14:10.200 fused_ordering(685) 00:14:10.200 fused_ordering(686) 00:14:10.200 fused_ordering(687) 00:14:10.200 fused_ordering(688) 00:14:10.200 fused_ordering(689) 00:14:10.200 fused_ordering(690) 00:14:10.200 fused_ordering(691) 00:14:10.200 fused_ordering(692) 00:14:10.200 fused_ordering(693) 00:14:10.200 fused_ordering(694) 00:14:10.200 fused_ordering(695) 00:14:10.200 fused_ordering(696) 00:14:10.200 fused_ordering(697) 00:14:10.200 fused_ordering(698) 00:14:10.200 fused_ordering(699) 00:14:10.200 fused_ordering(700) 00:14:10.200 fused_ordering(701) 00:14:10.200 fused_ordering(702) 00:14:10.200 fused_ordering(703) 00:14:10.200 fused_ordering(704) 00:14:10.200 fused_ordering(705) 00:14:10.200 fused_ordering(706) 00:14:10.200 fused_ordering(707) 00:14:10.200 fused_ordering(708) 00:14:10.200 fused_ordering(709) 00:14:10.200 fused_ordering(710) 00:14:10.201 fused_ordering(711) 00:14:10.201 fused_ordering(712) 00:14:10.201 fused_ordering(713) 00:14:10.201 fused_ordering(714) 00:14:10.201 fused_ordering(715) 00:14:10.201 fused_ordering(716) 00:14:10.201 fused_ordering(717) 00:14:10.201 fused_ordering(718) 00:14:10.201 fused_ordering(719) 00:14:10.201 fused_ordering(720) 00:14:10.201 fused_ordering(721) 00:14:10.201 fused_ordering(722) 00:14:10.201 fused_ordering(723) 00:14:10.201 fused_ordering(724) 00:14:10.201 fused_ordering(725) 00:14:10.201 fused_ordering(726) 00:14:10.201 fused_ordering(727) 00:14:10.201 fused_ordering(728) 00:14:10.201 fused_ordering(729) 00:14:10.201 fused_ordering(730) 00:14:10.201 fused_ordering(731) 00:14:10.201 fused_ordering(732) 00:14:10.201 fused_ordering(733) 00:14:10.201 fused_ordering(734) 00:14:10.201 fused_ordering(735) 00:14:10.201 fused_ordering(736) 00:14:10.201 fused_ordering(737) 00:14:10.201 fused_ordering(738) 00:14:10.201 fused_ordering(739) 00:14:10.201 fused_ordering(740) 00:14:10.201 fused_ordering(741) 00:14:10.201 fused_ordering(742) 00:14:10.201 fused_ordering(743) 
00:14:10.201 fused_ordering(744) 00:14:10.201 fused_ordering(745) 00:14:10.201 fused_ordering(746) 00:14:10.201 fused_ordering(747) 00:14:10.201 fused_ordering(748) 00:14:10.201 fused_ordering(749) 00:14:10.201 fused_ordering(750) 00:14:10.201 fused_ordering(751) 00:14:10.201 fused_ordering(752) 00:14:10.201 fused_ordering(753) 00:14:10.201 fused_ordering(754) 00:14:10.201 fused_ordering(755) 00:14:10.201 fused_ordering(756) 00:14:10.201 fused_ordering(757) 00:14:10.201 fused_ordering(758) 00:14:10.201 fused_ordering(759) 00:14:10.201 fused_ordering(760) 00:14:10.201 fused_ordering(761) 00:14:10.201 fused_ordering(762) 00:14:10.201 fused_ordering(763) 00:14:10.201 fused_ordering(764) 00:14:10.201 fused_ordering(765) 00:14:10.201 fused_ordering(766) 00:14:10.201 fused_ordering(767) 00:14:10.201 fused_ordering(768) 00:14:10.201 fused_ordering(769) 00:14:10.201 fused_ordering(770) 00:14:10.201 fused_ordering(771) 00:14:10.201 fused_ordering(772) 00:14:10.201 fused_ordering(773) 00:14:10.201 fused_ordering(774) 00:14:10.201 fused_ordering(775) 00:14:10.201 fused_ordering(776) 00:14:10.201 fused_ordering(777) 00:14:10.201 fused_ordering(778) 00:14:10.201 fused_ordering(779) 00:14:10.201 fused_ordering(780) 00:14:10.201 fused_ordering(781) 00:14:10.201 fused_ordering(782) 00:14:10.201 fused_ordering(783) 00:14:10.201 fused_ordering(784) 00:14:10.201 fused_ordering(785) 00:14:10.201 fused_ordering(786) 00:14:10.201 fused_ordering(787) 00:14:10.201 fused_ordering(788) 00:14:10.201 fused_ordering(789) 00:14:10.201 fused_ordering(790) 00:14:10.201 fused_ordering(791) 00:14:10.201 fused_ordering(792) 00:14:10.201 fused_ordering(793) 00:14:10.201 fused_ordering(794) 00:14:10.201 fused_ordering(795) 00:14:10.201 fused_ordering(796) 00:14:10.201 fused_ordering(797) 00:14:10.201 fused_ordering(798) 00:14:10.201 fused_ordering(799) 00:14:10.201 fused_ordering(800) 00:14:10.201 fused_ordering(801) 00:14:10.201 fused_ordering(802) 00:14:10.201 fused_ordering(803) 00:14:10.201 fused_ordering(804) 00:14:10.201 fused_ordering(805) 00:14:10.201 fused_ordering(806) 00:14:10.201 fused_ordering(807) 00:14:10.201 fused_ordering(808) 00:14:10.201 fused_ordering(809) 00:14:10.201 fused_ordering(810) 00:14:10.201 fused_ordering(811) 00:14:10.201 fused_ordering(812) 00:14:10.201 fused_ordering(813) 00:14:10.201 fused_ordering(814) 00:14:10.201 fused_ordering(815) 00:14:10.201 fused_ordering(816) 00:14:10.201 fused_ordering(817) 00:14:10.201 fused_ordering(818) 00:14:10.201 fused_ordering(819) 00:14:10.201 fused_ordering(820) 00:14:10.772 fused_ordering(821) 00:14:10.772 fused_ordering(822) 00:14:10.772 fused_ordering(823) 00:14:10.772 fused_ordering(824) 00:14:10.772 fused_ordering(825) 00:14:10.772 fused_ordering(826) 00:14:10.772 fused_ordering(827) 00:14:10.772 fused_ordering(828) 00:14:10.772 fused_ordering(829) 00:14:10.772 fused_ordering(830) 00:14:10.772 fused_ordering(831) 00:14:10.772 fused_ordering(832) 00:14:10.772 fused_ordering(833) 00:14:10.772 fused_ordering(834) 00:14:10.772 fused_ordering(835) 00:14:10.772 fused_ordering(836) 00:14:10.772 fused_ordering(837) 00:14:10.772 fused_ordering(838) 00:14:10.772 fused_ordering(839) 00:14:10.772 fused_ordering(840) 00:14:10.772 fused_ordering(841) 00:14:10.772 fused_ordering(842) 00:14:10.772 fused_ordering(843) 00:14:10.772 fused_ordering(844) 00:14:10.772 fused_ordering(845) 00:14:10.772 fused_ordering(846) 00:14:10.772 fused_ordering(847) 00:14:10.772 fused_ordering(848) 00:14:10.772 fused_ordering(849) 00:14:10.772 fused_ordering(850) 00:14:10.772 
fused_ordering(851) 00:14:10.772 fused_ordering(852) 00:14:10.772 fused_ordering(853) 00:14:10.772 fused_ordering(854) 00:14:10.772 fused_ordering(855) 00:14:10.772 fused_ordering(856) 00:14:10.772 fused_ordering(857) 00:14:10.772 fused_ordering(858) 00:14:10.772 fused_ordering(859) 00:14:10.772 fused_ordering(860) 00:14:10.772 fused_ordering(861) 00:14:10.772 fused_ordering(862) 00:14:10.772 fused_ordering(863) 00:14:10.772 fused_ordering(864) 00:14:10.772 fused_ordering(865) 00:14:10.772 fused_ordering(866) 00:14:10.772 fused_ordering(867) 00:14:10.772 fused_ordering(868) 00:14:10.772 fused_ordering(869) 00:14:10.772 fused_ordering(870) 00:14:10.772 fused_ordering(871) 00:14:10.772 fused_ordering(872) 00:14:10.772 fused_ordering(873) 00:14:10.772 fused_ordering(874) 00:14:10.772 fused_ordering(875) 00:14:10.772 fused_ordering(876) 00:14:10.772 fused_ordering(877) 00:14:10.772 fused_ordering(878) 00:14:10.772 fused_ordering(879) 00:14:10.772 fused_ordering(880) 00:14:10.772 fused_ordering(881) 00:14:10.772 fused_ordering(882) 00:14:10.772 fused_ordering(883) 00:14:10.772 fused_ordering(884) 00:14:10.772 fused_ordering(885) 00:14:10.772 fused_ordering(886) 00:14:10.772 fused_ordering(887) 00:14:10.772 fused_ordering(888) 00:14:10.772 fused_ordering(889) 00:14:10.772 fused_ordering(890) 00:14:10.772 fused_ordering(891) 00:14:10.772 fused_ordering(892) 00:14:10.772 fused_ordering(893) 00:14:10.772 fused_ordering(894) 00:14:10.772 fused_ordering(895) 00:14:10.772 fused_ordering(896) 00:14:10.772 fused_ordering(897) 00:14:10.772 fused_ordering(898) 00:14:10.772 fused_ordering(899) 00:14:10.772 fused_ordering(900) 00:14:10.772 fused_ordering(901) 00:14:10.772 fused_ordering(902) 00:14:10.772 fused_ordering(903) 00:14:10.772 fused_ordering(904) 00:14:10.772 fused_ordering(905) 00:14:10.772 fused_ordering(906) 00:14:10.772 fused_ordering(907) 00:14:10.772 fused_ordering(908) 00:14:10.772 fused_ordering(909) 00:14:10.772 fused_ordering(910) 00:14:10.772 fused_ordering(911) 00:14:10.772 fused_ordering(912) 00:14:10.772 fused_ordering(913) 00:14:10.772 fused_ordering(914) 00:14:10.772 fused_ordering(915) 00:14:10.772 fused_ordering(916) 00:14:10.772 fused_ordering(917) 00:14:10.772 fused_ordering(918) 00:14:10.772 fused_ordering(919) 00:14:10.772 fused_ordering(920) 00:14:10.772 fused_ordering(921) 00:14:10.772 fused_ordering(922) 00:14:10.772 fused_ordering(923) 00:14:10.772 fused_ordering(924) 00:14:10.772 fused_ordering(925) 00:14:10.772 fused_ordering(926) 00:14:10.772 fused_ordering(927) 00:14:10.772 fused_ordering(928) 00:14:10.772 fused_ordering(929) 00:14:10.772 fused_ordering(930) 00:14:10.772 fused_ordering(931) 00:14:10.772 fused_ordering(932) 00:14:10.772 fused_ordering(933) 00:14:10.772 fused_ordering(934) 00:14:10.772 fused_ordering(935) 00:14:10.772 fused_ordering(936) 00:14:10.772 fused_ordering(937) 00:14:10.772 fused_ordering(938) 00:14:10.772 fused_ordering(939) 00:14:10.772 fused_ordering(940) 00:14:10.772 fused_ordering(941) 00:14:10.772 fused_ordering(942) 00:14:10.772 fused_ordering(943) 00:14:10.772 fused_ordering(944) 00:14:10.772 fused_ordering(945) 00:14:10.772 fused_ordering(946) 00:14:10.772 fused_ordering(947) 00:14:10.772 fused_ordering(948) 00:14:10.772 fused_ordering(949) 00:14:10.772 fused_ordering(950) 00:14:10.772 fused_ordering(951) 00:14:10.772 fused_ordering(952) 00:14:10.772 fused_ordering(953) 00:14:10.772 fused_ordering(954) 00:14:10.772 fused_ordering(955) 00:14:10.772 fused_ordering(956) 00:14:10.772 fused_ordering(957) 00:14:10.772 fused_ordering(958) 
00:14:10.772 fused_ordering(959) 00:14:10.772 fused_ordering(960) 00:14:10.772 fused_ordering(961) 00:14:10.772 fused_ordering(962) 00:14:10.772 fused_ordering(963) 00:14:10.772 fused_ordering(964) 00:14:10.772 fused_ordering(965) 00:14:10.772 fused_ordering(966) 00:14:10.772 fused_ordering(967) 00:14:10.772 fused_ordering(968) 00:14:10.772 fused_ordering(969) 00:14:10.772 fused_ordering(970) 00:14:10.772 fused_ordering(971) 00:14:10.772 fused_ordering(972) 00:14:10.772 fused_ordering(973) 00:14:10.772 fused_ordering(974) 00:14:10.772 fused_ordering(975) 00:14:10.772 fused_ordering(976) 00:14:10.772 fused_ordering(977) 00:14:10.772 fused_ordering(978) 00:14:10.772 fused_ordering(979) 00:14:10.772 fused_ordering(980) 00:14:10.772 fused_ordering(981) 00:14:10.772 fused_ordering(982) 00:14:10.772 fused_ordering(983) 00:14:10.772 fused_ordering(984) 00:14:10.772 fused_ordering(985) 00:14:10.772 fused_ordering(986) 00:14:10.772 fused_ordering(987) 00:14:10.772 fused_ordering(988) 00:14:10.772 fused_ordering(989) 00:14:10.772 fused_ordering(990) 00:14:10.772 fused_ordering(991) 00:14:10.772 fused_ordering(992) 00:14:10.772 fused_ordering(993) 00:14:10.772 fused_ordering(994) 00:14:10.772 fused_ordering(995) 00:14:10.772 fused_ordering(996) 00:14:10.772 fused_ordering(997) 00:14:10.772 fused_ordering(998) 00:14:10.772 fused_ordering(999) 00:14:10.772 fused_ordering(1000) 00:14:10.772 fused_ordering(1001) 00:14:10.772 fused_ordering(1002) 00:14:10.772 fused_ordering(1003) 00:14:10.772 fused_ordering(1004) 00:14:10.772 fused_ordering(1005) 00:14:10.772 fused_ordering(1006) 00:14:10.772 fused_ordering(1007) 00:14:10.772 fused_ordering(1008) 00:14:10.772 fused_ordering(1009) 00:14:10.772 fused_ordering(1010) 00:14:10.772 fused_ordering(1011) 00:14:10.772 fused_ordering(1012) 00:14:10.772 fused_ordering(1013) 00:14:10.772 fused_ordering(1014) 00:14:10.772 fused_ordering(1015) 00:14:10.772 fused_ordering(1016) 00:14:10.772 fused_ordering(1017) 00:14:10.772 fused_ordering(1018) 00:14:10.772 fused_ordering(1019) 00:14:10.772 fused_ordering(1020) 00:14:10.772 fused_ordering(1021) 00:14:10.772 fused_ordering(1022) 00:14:10.772 fused_ordering(1023) 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.773 rmmod nvme_tcp 00:14:10.773 rmmod nvme_fabrics 00:14:10.773 rmmod nvme_keyring 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:10.773 10:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 326578 ']' 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 326578 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 326578 ']' 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 326578 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 326578 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 326578' 00:14:10.773 killing process with pid 326578 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 326578 00:14:10.773 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 326578 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.034 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.945 00:14:12.945 real 0m13.517s 00:14:12.945 user 0m7.172s 00:14:12.945 sys 0m7.268s 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.945 ************************************ 00:14:12.945 END TEST nvmf_fused_ordering 00:14:12.945 
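The teardown traced above (trap reset plus nvmftestfini) reduces to the following sketch; the pid and interface names are from this run, and the netns deletion line is an assumption about what _remove_spdk_ns does rather than a verbatim trace:

  sync
  modprobe -v -r nvme-tcp          # rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 326578                      # the nvmf_tgt started for this test (reactor_1)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
  ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns (traced via xtrace_disable_per_cmd)
  ip -4 addr flush cvl_0_1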
************************************ 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:12.945 10:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.207 ************************************ 00:14:13.207 START TEST nvmf_ns_masking 00:14:13.207 ************************************ 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.207 * Looking for test storage... 00:14:13.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:13.207 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:13.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.208 --rc genhtml_branch_coverage=1 00:14:13.208 --rc genhtml_function_coverage=1 00:14:13.208 --rc genhtml_legend=1 00:14:13.208 --rc geninfo_all_blocks=1 00:14:13.208 --rc geninfo_unexecuted_blocks=1 00:14:13.208 00:14:13.208 ' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:13.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.208 --rc genhtml_branch_coverage=1 00:14:13.208 --rc genhtml_function_coverage=1 00:14:13.208 --rc genhtml_legend=1 00:14:13.208 --rc geninfo_all_blocks=1 00:14:13.208 --rc geninfo_unexecuted_blocks=1 00:14:13.208 00:14:13.208 ' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:13.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.208 --rc genhtml_branch_coverage=1 00:14:13.208 --rc genhtml_function_coverage=1 00:14:13.208 --rc genhtml_legend=1 00:14:13.208 --rc geninfo_all_blocks=1 00:14:13.208 --rc geninfo_unexecuted_blocks=1 00:14:13.208 00:14:13.208 ' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:13.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.208 --rc genhtml_branch_coverage=1 00:14:13.208 --rc genhtml_function_coverage=1 00:14:13.208 --rc genhtml_legend=1 00:14:13.208 --rc geninfo_all_blocks=1 00:14:13.208 --rc geninfo_unexecuted_blocks=1 00:14:13.208 00:14:13.208 ' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
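The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' or ':' and walked field by field. A compact sketch of that traced logic (not the verbatim helper, which also normalizes each field through its decimal function):

  cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: op=$2 v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]    # versions equal: true only for ==, <=, >=
  }

Here 1.15 < 2 holds at the first field (1 < 2), so the helper returns 0 and the harness exports the pre-2.x lcov option set seen in the LCOV_OPTS lines above.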
nvmf/common.sh@7 -- # uname -s 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:13.208 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=369b73b8-0820-4efd-8838-c3575c888a1e 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a65f7116-c8ab-4de9-9cd2-f4b4e3bc909e 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=927f7b43-8d59-4816-bf19-eeba83e7d54a 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:13.469 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:13.470 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:13.470 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:21.618 10:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:21.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:21.618 10:54:39 
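[annotation] gather_supported_nvmf_pci_devs builds per-family arrays (e810, x722, mlx) keyed on vendor:device IDs and then walks the candidates, which is where the two "Found 0000:4b:00.0 / 0000:4b:00.1" lines come from. A reduced sketch of the same classification straight from sysfs (the ID table below is trimmed to the entries relevant to this run):

    declare -A family=( ["8086:1592"]=e810 ["8086:159b"]=e810 ["8086:37d2"]=x722 )
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")   # e.g. 0x8086 / 0x159b
        key="${vendor#0x}:${device#0x}"
        [[ -n ${family[$key]:-} ]] &&
            echo "Found ${dev##*/} ($vendor - $device) -> ${family[$key]}"
    done
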
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:21.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:21.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
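[annotation] Mapping a PCI function to its kernel netdev is a plain sysfs glob: every interface bound to the function appears under /sys/bus/pci/devices/<addr>/net/, and the trace additionally requires the link state to be up before accepting it. Equivalent sketch, using the address found in this run:

    pci=0000:4b:00.0                          # first E810 port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basenames only, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
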
00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.618 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:21.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.619 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.619 10:54:40 
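[annotation] nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the default namespace, so NVMe/TCP traffic crosses a real link instead of loopback. The shape of that setup, mirroring the trace (root required; the lo bring-up continues just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
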
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:21.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:14:21.619 00:14:21.619 --- 10.0.0.2 ping statistics --- 00:14:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.619 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:14:21.619 00:14:21.619 --- 10.0.0.1 ping statistics --- 00:14:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.619 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=331367 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 331367 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 331367 ']' 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
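[annotation] nvmfappstart launches nvmf_tgt inside the target namespace, and waitforlisten then blocks until the app answers on its RPC socket; the iptables rule above opens TCP/4420 on the initiator-side interface first. A reduced sketch of the start-and-wait core (paths shortened; the unix socket stays reachable from the default namespace even though the process runs in the netns):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5   # target not listening yet
    done
    echo "nvmf_tgt ($nvmfpid) is up"
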
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.619 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.619 [2024-11-15 10:54:40.277177] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:14:21.619 [2024-11-15 10:54:40.277247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.619 [2024-11-15 10:54:40.376722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.619 [2024-11-15 10:54:40.428029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.619 [2024-11-15 10:54:40.428080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.619 [2024-11-15 10:54:40.428089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.619 [2024-11-15 10:54:40.428096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.619 [2024-11-15 10:54:40.428102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:21.619 [2024-11-15 10:54:40.428884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.619 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.881 [2024-11-15 10:54:41.297285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.881 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:21.881 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:21.881 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:22.141 Malloc1 00:14:22.141 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:22.402 Malloc2 00:14:22.402 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:22.662 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:22.662 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.923 [2024-11-15 10:54:42.337258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.923 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:22.923 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 927f7b43-8d59-4816-bf19-eeba83e7d54a -a 10.0.0.2 -s 4420 -i 4 00:14:23.183 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.183 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:23.183 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.183 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:23.183 
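[annotation] Once the reactor is up, the whole fixture is a handful of RPCs: create the TCP transport, back two namespaces with malloc bdevs, create the subsystem, attach namespace 1, and open the listener the initiator will dial. Condensed from the trace (rpc.py path shortened):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
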
10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.093 [ 0]:0x1 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.093 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0eb41e18315c4fdfab2216b923eadd0d 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0eb41e18315c4fdfab2216b923eadd0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.353 [ 0]:0x1 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0eb41e18315c4fdfab2216b923eadd0d 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0eb41e18315c4fdfab2216b923eadd0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.353 10:54:44 
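[annotation] ns_is_visible is the test's oracle: a namespace counts as visible only if its NSID shows up in nvme list-ns and nvme id-ns reports a non-zero NGUID for it. A compact sketch of the same check (controller name as resolved via nvme list-subsys above; requires jq):

    ns_is_visible() {   # usage: ns_is_visible /dev/nvme0 0x1
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }
    ns_is_visible /dev/nvme0 0x1 && echo "nsid 1 visible"
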
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.353 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.612 [ 1]:0x2 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:25.612 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.871 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.871 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:26.131 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:26.131 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 927f7b43-8d59-4816-bf19-eeba83e7d54a -a 10.0.0.2 -s 4420 -i 4 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:26.392 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:28.300 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.560 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.560 [ 0]:0x2 00:14:28.560 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.560 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.560 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c84b0e55949741299a449171fd440a1c 00:14:28.560 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.560 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.820 [ 0]:0x1 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0eb41e18315c4fdfab2216b923eadd0d 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0eb41e18315c4fdfab2216b923eadd0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.820 [ 1]:0x2 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.820 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.081 10:54:48 
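[annotation] This is the core of the masking test: after namespace 1 was re-added with --no-auto-visible it vanished from the host, nvmf_ns_add_host exposes it to host1's NQN at runtime (the re-run of nvme list-ns above now reports [ 0]:0x1 with a real NGUID), and nvmf_ns_remove_host hides it again, all without reconnecting. The toggle pair in isolation:

    rpc=./scripts/rpc.py
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
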
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.081 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.341 [ 0]:0x2 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.341 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.600 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:29.600 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 927f7b43-8d59-4816-bf19-eeba83e7d54a -a 10.0.0.2 -s 4420 -i 4 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:29.601 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:31.508 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:31.508 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.768 [ 0]:0x1 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0eb41e18315c4fdfab2216b923eadd0d 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0eb41e18315c4fdfab2216b923eadd0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.768 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.769 [ 1]:0x2 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.769 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.027 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.028 [ 0]:0x2 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.028 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.335 10:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.335 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.336 [2024-11-15 10:54:51.731178] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:32.336 request: 00:14:32.336 { 00:14:32.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.336 "nsid": 2, 00:14:32.336 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.336 "method": "nvmf_ns_remove_host", 00:14:32.336 "req_id": 1 00:14:32.336 } 00:14:32.336 Got JSON-RPC error response 00:14:32.336 response: 00:14:32.336 { 00:14:32.336 "code": -32602, 00:14:32.336 "message": "Invalid parameters" 00:14:32.336 } 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.336 10:54:51 
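[annotation] The JSON-RPC failure above is the expected outcome, not a test bug: namespace 2 was attached without --no-auto-visible, and per-host visibility calls are rejected for auto-visible namespaces, hence the -32602 "Invalid parameters" response. The NOT wrapper inverts the exit status so the rejection counts as a pass; a plain-bash version of the same assertion:

    rpc=./scripts/rpc.py
    if $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo "FAIL: remove_host on an auto-visible namespace should be rejected" >&2
        exit 1
    fi
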
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.336 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.596 [ 0]:0x2 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c84b0e55949741299a449171fd440a1c 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c84b0e55949741299a449171fd440a1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:32.596 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=333782 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 333782 /var/tmp/host.sock 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 333782 ']' 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:32.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:32.596 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.596 [2024-11-15 10:54:52.100334] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:14:32.596 [2024-11-15 10:54:52.100384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333782 ] 00:14:32.856 [2024-11-15 10:54:52.188197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.856 [2024-11-15 10:54:52.224317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.427 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:33.427 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:33.427 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.687 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.687 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 369b73b8-0820-4efd-8838-c3575c888a1e 00:14:33.687 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.687 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 369B73B808204EFD8838C3575C888A1E -i 00:14:33.947 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a65f7116-c8ab-4de9-9cd2-f4b4e3bc909e 00:14:33.947 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.947 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A65F7116C8AB4DE99CD2F4B4E3BC909E -i 00:14:34.208 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.469 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:34.469 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:34.469 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:34.729 nvme0n1 00:14:34.729 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.729 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.989 nvme1n2 00:14:34.989 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:34.989 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:34.989 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:34.989 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:34.989 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:35.249 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:35.249 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:35.249 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:35.249 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:35.511 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 369b73b8-0820-4efd-8838-c3575c888a1e == \3\6\9\b\7\3\b\8\-\0\8\2\0\-\4\e\f\d\-\8\8\3\8\-\c\3\5\7\5\c\8\8\8\a\1\e ]] 00:14:35.511 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:35.511 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:35.511 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:35.771 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a65f7116-c8ab-4de9-9cd2-f4b4e3bc909e == \a\6\5\f\7\1\1\6\-\c\8\a\b\-\4\d\e\9\-\9\c\d\2\-\f\4\b\4\e\3\b\c\9\0\9\e ]]
00:14:35.771 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:35.771 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:36.032 [2024-11-15 10:54:55.353117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:2 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:36.032 [2024-11-15 10:54:55.353151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:14:36.032 [2024-11-15 10:54:55.353165] nvme_ns.c: 287:nvme_ctrlr_identify_id_desc: *WARNING*: Failed to retrieve NS ID Descriptor List
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 369b73b8-0820-4efd-8838-c3575c888a1e
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 369B73B808204EFD8838C3575C888A1E
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 369B73B808204EFD8838C3575C888A1E
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:36.032 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 369B73B808204EFD8838C3575C888A1E
00:14:36.032 [2024-11-15 10:54:55.537156] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:14:36.032 [2024-11-15 10:54:55.537182] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:14:36.032 [2024-11-15 10:54:55.537189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:36.032 request:
00:14:36.032 {
00:14:36.032 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:36.032 "namespace": {
00:14:36.032 "bdev_name": "invalid",
00:14:36.032 "nsid": 1,
00:14:36.032 "nguid": "369B73B808204EFD8838C3575C888A1E",
00:14:36.032 "no_auto_visible": false,
00:14:36.032 "no_metadata": false
00:14:36.032 },
00:14:36.032 "method": "nvmf_subsystem_add_ns",
00:14:36.032 "req_id": 1
00:14:36.032 }
00:14:36.032 Got JSON-RPC error response
00:14:36.032 response:
00:14:36.032 {
00:14:36.032 "code": -32602,
00:14:36.032 "message": "Invalid parameters"
00:14:36.032 }
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 369b73b8-0820-4efd-8838-c3575c888a1e
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 369B73B808204EFD8838C3575C888A1E -i
00:14:36.293 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 333782
00:14:38.836 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 333782 ']'
00:14:38.837 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 333782
00:14:38.837 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname
00:14:38.837 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:14:38.837 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 333782
00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
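The failing RPC above is the script's negative-path check: nvmf_subsystem_add_ns is pointed at a bdev literally named "invalid", the target fails the open with error -19 (no such device) and answers with JSON-RPC code -32602, and the harness's NOT wrapper turns that expected failure into a pass before Malloc1 is re-added and the host process is stopped. A minimal standalone sketch of the same expect-failure pattern, using the rpc.py path, subsystem NQN, and NGUID from this run (real test scripts use the NOT() helper from common/autotest_common.sh rather than an explicit if):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# A namespace cannot be backed by a bdev that does not exist, so this call must fail.
if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid \
        -n 1 -g 369B73B808204EFD8838C3575C888A1E; then
    echo "unexpected success: a namespace was added on a missing bdev" >&2
    exit 1
fi
echo "expected failure observed (target rejected the missing bdev)"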
00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 333782' 00:14:38.837 killing process with pid 333782 00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 333782 00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 333782 00:14:38.837 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.097 rmmod nvme_tcp 00:14:39.097 rmmod nvme_fabrics 00:14:39.097 rmmod nvme_keyring 00:14:39.097 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 331367 ']' 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 331367 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 331367 ']' 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 331367 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 331367 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 331367' 00:14:39.098 killing process with pid 331367 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 331367 00:14:39.098 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 331367 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:39.358 10:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.358 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.359 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.359 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.272 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.273 00:14:41.273 real 0m28.225s 00:14:41.273 user 0m31.944s 00:14:41.273 sys 0m8.267s 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.273 ************************************ 00:14:41.273 END TEST nvmf_ns_masking 00:14:41.273 ************************************ 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:41.273 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.533 ************************************ 00:14:41.533 START TEST nvmf_nvme_cli 00:14:41.533 ************************************ 00:14:41.533 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.533 * Looking for test storage... 
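The teardown traced above (nvmftestfini) follows a fixed order: kill the remaining app processes, unload the kernel NVMe-over-fabrics modules, strip the SPDK-tagged firewall rules, and remove the test network namespace before the next test (nvme_cli) starts. Roughly the same sequence as plain commands; the netns deletion line is an assumption about what _remove_spdk_ns does, the rest appear verbatim in the trace:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1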
00:14:41.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.534 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.534 --rc genhtml_branch_coverage=1 00:14:41.534 --rc genhtml_function_coverage=1 00:14:41.534 --rc genhtml_legend=1 00:14:41.534 --rc geninfo_all_blocks=1 00:14:41.534 --rc geninfo_unexecuted_blocks=1 00:14:41.534 00:14:41.534 ' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.534 --rc genhtml_branch_coverage=1 00:14:41.534 --rc genhtml_function_coverage=1 00:14:41.534 --rc genhtml_legend=1 00:14:41.534 --rc geninfo_all_blocks=1 00:14:41.534 --rc geninfo_unexecuted_blocks=1 00:14:41.534 00:14:41.534 ' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.534 --rc genhtml_branch_coverage=1 00:14:41.534 --rc genhtml_function_coverage=1 00:14:41.534 --rc genhtml_legend=1 00:14:41.534 --rc geninfo_all_blocks=1 00:14:41.534 --rc geninfo_unexecuted_blocks=1 00:14:41.534 00:14:41.534 ' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.534 --rc genhtml_branch_coverage=1 00:14:41.534 --rc genhtml_function_coverage=1 00:14:41.534 --rc genhtml_legend=1 00:14:41.534 --rc geninfo_all_blocks=1 00:14:41.534 --rc geninfo_unexecuted_blocks=1 00:14:41.534 00:14:41.534 ' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
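The trace just above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2: cmp_versions splits both version strings on "." and "-", walks the fields in parallel, pads the shorter list with zeros, and compares field by field numerically. An illustrative re-implementation of that comparison (the function name is mine, not the script's; purely numeric fields assumed):

# Return 0 (true) when dotted version $1 is strictly less than $2.
version_lt() {
    local IFS='.-'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message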
00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.534 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.534 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.535 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:49.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:49.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.674 
10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.674 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:49.675 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:49.675 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:14:49.675 00:14:49.675 --- 10.0.0.2 ping statistics --- 00:14:49.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.675 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:14:49.675 00:14:49.675 --- 10.0.0.1 ping statistics --- 00:14:49.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.675 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=339412 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 339412 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 339412 ']' 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.675 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.675 [2024-11-15 10:55:08.636353] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:14:49.675 [2024-11-15 10:55:08.636419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.676 [2024-11-15 10:55:08.736322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.676 [2024-11-15 10:55:08.790352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.676 [2024-11-15 10:55:08.790431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.676 [2024-11-15 10:55:08.790440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.676 [2024-11-15 10:55:08.790448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.676 [2024-11-15 10:55:08.790454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.676 [2024-11-15 10:55:08.792885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.676 [2024-11-15 10:55:08.793030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.676 [2024-11-15 10:55:08.793197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.676 [2024-11-15 10:55:08.793198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.936 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.936 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:49.936 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.936 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.936 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.198 [2024-11-15 10:55:09.514751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.198 Malloc0 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 Malloc1
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 [2024-11-15 10:55:09.624943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.198 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:14:50.460
00:14:50.460 Discovery Log Number of Records 2, Generation counter 2
00:14:50.460 =====Discovery Log Entry 0======
00:14:50.460 trtype: tcp
00:14:50.460 adrfam: ipv4
00:14:50.460 subtype: current discovery subsystem
00:14:50.460 treq: not required
00:14:50.460 portid: 0
00:14:50.460 trsvcid: 4420
00:14:50.460 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:50.460 traddr: 10.0.0.2
00:14:50.460 eflags: explicit discovery connections, duplicate discovery information
00:14:50.460 sectype: none
00:14:50.460 =====Discovery Log Entry 1======
00:14:50.460 trtype: tcp
00:14:50.460 adrfam: ipv4
00:14:50.460 subtype: nvme subsystem
00:14:50.460 treq: not required
00:14:50.460 portid: 0
00:14:50.460 trsvcid: 4420
00:14:50.460 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:50.460 traddr: 10.0.0.2
00:14:50.460 eflags: none
00:14:50.460 sectype: none
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:14:50.460 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]]
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2
00:14:52.368 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2
00:14:54.402 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:14:54.403 10:55:13
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:54.403 /dev/nvme0n2 ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.403 10:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.403 rmmod nvme_tcp 00:14:54.403 rmmod nvme_fabrics 00:14:54.403 rmmod nvme_keyring 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 339412 ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 339412 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 339412 ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 339412 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 339412 
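The nvme_cli test traced above is a complete round trip: the target side is assembled from RPCs (TCP transport, two 64 MB malloc bdevs, one subsystem carrying both namespaces, one data listener plus the discovery listener), the host side discovers and connects with nvme-cli, verifies both namespaces by matching the subsystem serial in lsblk, and disconnects again. A condensed sketch of the same flow, assuming a running nvmf_tgt, scripts/rpc.py on PATH as rpc.py, and 10.0.0.2:4420 as the listener address used in this run:

  # target side (mirrors the target/nvme_cli.sh@19..@28 steps in the trace)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # host side (the run above also passes --hostnqn/--hostid; omitted here for brevity)
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The waitforserial helper in the trace is the same lsblk | grep pipeline run in a retry loop until the namespace count reaches 2; waitforserial_disconnect polls the inverse condition after the disconnect.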
00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 339412' 00:14:54.403 killing process with pid 339412 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 339412 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 339412 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.403 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.965 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.965 00:14:56.965 real 0m15.180s 00:14:56.965 user 0m22.698s 00:14:56.965 sys 0m6.354s 00:14:56.965 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:56.965 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.965 ************************************ 00:14:56.965 END TEST nvmf_nvme_cli 00:14:56.965 ************************************ 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.965 ************************************ 00:14:56.965 START TEST nvmf_vfio_user 00:14:56.965 ************************************ 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:56.965 * Looking for test storage... 00:14:56.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:56.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.965 --rc genhtml_branch_coverage=1 00:14:56.965 --rc genhtml_function_coverage=1 00:14:56.965 --rc genhtml_legend=1 00:14:56.965 --rc geninfo_all_blocks=1 00:14:56.965 --rc geninfo_unexecuted_blocks=1 00:14:56.965 00:14:56.965 ' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:56.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.965 --rc genhtml_branch_coverage=1 00:14:56.965 --rc genhtml_function_coverage=1 00:14:56.965 --rc genhtml_legend=1 00:14:56.965 --rc geninfo_all_blocks=1 00:14:56.965 --rc geninfo_unexecuted_blocks=1 00:14:56.965 00:14:56.965 ' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:56.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.965 --rc genhtml_branch_coverage=1 00:14:56.965 --rc genhtml_function_coverage=1 00:14:56.965 --rc genhtml_legend=1 00:14:56.965 --rc geninfo_all_blocks=1 00:14:56.965 --rc geninfo_unexecuted_blocks=1 00:14:56.965 00:14:56.965 ' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:56.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.965 --rc genhtml_branch_coverage=1 00:14:56.965 --rc genhtml_function_coverage=1 00:14:56.965 --rc genhtml_legend=1 00:14:56.965 --rc geninfo_all_blocks=1 00:14:56.965 --rc geninfo_unexecuted_blocks=1 00:14:56.965 00:14:56.965 ' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
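One artifact worth noting in the sourcing above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash prints "[: : integer expression expected" because the variable under test is empty in this environment. The run continues past it, as the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE assignments that follow show, so the message is benign noise in these logs. A minimal defensive pattern for such checks, with SOME_FLAG as a hypothetical stand-in for whichever variable is unset here:

  # hypothetical guard: default the unset variable to 0 before the numeric test
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag set"
  fi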
00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=340994 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 340994' 00:14:56.965 Process pid: 340994 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 340994 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 340994 ']' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.965 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.965 [2024-11-15 10:55:16.361698] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:14:56.965 [2024-11-15 10:55:16.361778] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.965 [2024-11-15 10:55:16.450815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.965 [2024-11-15 10:55:16.485101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.965 [2024-11-15 10:55:16.485131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.965 [2024-11-15 10:55:16.485137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.965 [2024-11-15 10:55:16.485142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.965 [2024-11-15 10:55:16.485146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.965 [2024-11-15 10:55:16.486483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.965 [2024-11-15 10:55:16.486609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.965 [2024-11-15 10:55:16.486696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.965 [2024-11-15 10:55:16.486694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.903 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:57.903 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:57.904 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.843 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.843 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.843 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.843 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.843 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:59.103 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.103 Malloc1 00:14:59.103 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:59.363 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.623 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.623 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.623 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.623 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.883 Malloc2 00:14:59.883 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
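At this point device 1 is fully wired up (transport, malloc bdev, subsystem, namespace, vfio-user listener) and the seq 1 $NUM_DEVICES loop is repeating the same steps for device 2. The per-device pattern, sketched as it ran for device 1:

  rpc.py nvmf_create_transport -t VFIOUSER            # once, before the loop
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Unlike the TCP case, the listener address is a filesystem path rather than an IP: the target creates the vfio-user control socket under that directory (it appears later in this log as /var/run/vfio-user/domain/vfio-user1/1/cntrl), and clients attach to it in place of a PCIe device.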
00:15:00.143 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:00.143 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.403 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:00.404 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:00.404 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.404 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.404 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.404 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.404 [2024-11-15 10:55:19.870486] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:15:00.404 [2024-11-15 10:55:19.870531] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341684 ] 00:15:00.404 [2024-11-15 10:55:19.911868] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:00.404 [2024-11-15 10:55:19.918867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.404 [2024-11-15 10:55:19.918886] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1578da1000 00:15:00.404 [2024-11-15 10:55:19.919871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.920873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.921871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.922880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.923887] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.924897] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.925904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.926912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.404 [2024-11-15 10:55:19.927925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.404 [2024-11-15 10:55:19.927932] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1578d96000 00:15:00.404 [2024-11-15 10:55:19.928844] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.666 [2024-11-15 10:55:19.940838] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:00.666 [2024-11-15 10:55:19.940862] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:00.666 [2024-11-15 10:55:19.946032] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.666 [2024-11-15 10:55:19.946067] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.666 [2024-11-15 10:55:19.946130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:00.666 [2024-11-15 10:55:19.946142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:00.666 [2024-11-15 10:55:19.946146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:00.666 [2024-11-15 10:55:19.947037] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:00.666 [2024-11-15 10:55:19.947045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:00.666 [2024-11-15 10:55:19.947050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:00.666 [2024-11-15 10:55:19.948040] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.666 [2024-11-15 10:55:19.948046] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:00.666 [2024-11-15 10:55:19.948052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.666 [2024-11-15 10:55:19.949047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:00.667 [2024-11-15 10:55:19.949054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.667 [2024-11-15 10:55:19.950051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
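The register traffic around this point is the standard NVMe controller bring-up, read VS and CAP, wait for CSTS.RDY = 0 with CC.EN = 0, program AQA/ASQ/ACQ (the writes to offsets 0x24, 0x28 and 0x30 below), set CC.EN = 1 and wait for CSTS.RDY = 1, only carried over the vfio-user transport instead of a real PCIe function. Reproducing the attach by hand needs nothing beyond the transport ID string, as in the invocation that started this trace (build path abbreviated):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci

The -L flags enable the nvme, nvme_vfio and vfio_pci debug log components that produce the per-register lines here; -g is presumably what surfaces as --single-file-segments in the identify process's EAL parameters above.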
00:15:00.667 [2024-11-15 10:55:19.950057] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:00.667 [2024-11-15 10:55:19.950061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:00.667 [2024-11-15 10:55:19.950066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.667 [2024-11-15 10:55:19.950171] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:00.667 [2024-11-15 10:55:19.950175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.667 [2024-11-15 10:55:19.950179] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:00.667 [2024-11-15 10:55:19.951058] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:00.667 [2024-11-15 10:55:19.952068] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:00.667 [2024-11-15 10:55:19.953073] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.667 [2024-11-15 10:55:19.954067] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.667 [2024-11-15 10:55:19.954117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.667 [2024-11-15 10:55:19.955082] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:00.667 [2024-11-15 10:55:19.955088] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.667 [2024-11-15 10:55:19.955092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:00.667 [2024-11-15 10:55:19.955112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955122] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.667 [2024-11-15 10:55:19.955126] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.667 [2024-11-15 10:55:19.955129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.667 [2024-11-15 10:55:19.955139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955179] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:00.667 [2024-11-15 10:55:19.955183] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:00.667 [2024-11-15 10:55:19.955186] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:00.667 [2024-11-15 10:55:19.955190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.667 [2024-11-15 10:55:19.955195] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:00.667 [2024-11-15 10:55:19.955198] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:00.667 [2024-11-15 10:55:19.955201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.667 [2024-11-15 10:55:19.955240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.667 [2024-11-15 10:55:19.955246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.667 [2024-11-15 10:55:19.955252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.667 [2024-11-15 10:55:19.955255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955283] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:00.667 
[2024-11-15 10:55:19.955287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955368] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.667 [2024-11-15 10:55:19.955371] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:00.667 [2024-11-15 10:55:19.955373] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.667 [2024-11-15 10:55:19.955378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955395] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:00.667 [2024-11-15 10:55:19.955407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955417] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.667 [2024-11-15 10:55:19.955420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.667 [2024-11-15 10:55:19.955423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.667 [2024-11-15 10:55:19.955427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955461] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.667 [2024-11-15 10:55:19.955464] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.667 [2024-11-15 10:55:19.955467] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.667 [2024-11-15 10:55:19.955472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.667 [2024-11-15 10:55:19.955484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.667 [2024-11-15 10:55:19.955490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:00.667 [2024-11-15 10:55:19.955516] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.667 [2024-11-15 10:55:19.955519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:00.668 [2024-11-15 10:55:19.955523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:00.668 [2024-11-15 10:55:19.955537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955605] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.668 [2024-11-15 10:55:19.955608] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.668 [2024-11-15 10:55:19.955611] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.668 [2024-11-15 10:55:19.955613] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.668 [2024-11-15 10:55:19.955617] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:00.668 [2024-11-15 10:55:19.955622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.668 [2024-11-15 10:55:19.955627] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.668 [2024-11-15 10:55:19.955630] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.668 [2024-11-15 10:55:19.955633] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.668 [2024-11-15 10:55:19.955637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955643] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.668 [2024-11-15 10:55:19.955646] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.668 [2024-11-15 10:55:19.955648] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.668 [2024-11-15 10:55:19.955653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955659] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.668 [2024-11-15 10:55:19.955662] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.668 [2024-11-15 10:55:19.955664] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.668 [2024-11-15 10:55:19.955668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.668 [2024-11-15 10:55:19.955674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.668 [2024-11-15 10:55:19.955695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.668 ===================================================== 00:15:00.668 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.668 ===================================================== 00:15:00.668 Controller Capabilities/Features 00:15:00.668 ================================ 00:15:00.668 Vendor ID: 4e58 00:15:00.668 Subsystem Vendor ID: 4e58 00:15:00.668 Serial Number: SPDK1 00:15:00.668 Model Number: SPDK bdev Controller 00:15:00.668 Firmware Version: 25.01 00:15:00.668 Recommended Arb Burst: 6 00:15:00.668 IEEE OUI Identifier: 8d 6b 50 00:15:00.668 Multi-path I/O 00:15:00.668 May have multiple subsystem ports: Yes 00:15:00.668 May have multiple controllers: Yes 00:15:00.668 Associated with SR-IOV VF: No 00:15:00.668 Max Data Transfer Size: 131072 00:15:00.668 Max Number of Namespaces: 32 00:15:00.668 Max Number of I/O Queues: 127 00:15:00.668 NVMe Specification Version (VS): 1.3 00:15:00.668 NVMe Specification Version (Identify): 1.3 00:15:00.668 Maximum Queue Entries: 256 00:15:00.668 Contiguous Queues Required: Yes 00:15:00.668 Arbitration Mechanisms Supported 00:15:00.668 Weighted Round Robin: Not Supported 00:15:00.668 Vendor Specific: Not Supported 00:15:00.668 Reset Timeout: 15000 ms 00:15:00.668 Doorbell Stride: 4 bytes 00:15:00.668 NVM Subsystem Reset: Not Supported 00:15:00.668 Command Sets Supported 00:15:00.668 NVM Command Set: Supported 00:15:00.668 Boot Partition: Not Supported 00:15:00.668 Memory Page Size Minimum: 4096 bytes 00:15:00.668 Memory Page Size Maximum: 4096 bytes 00:15:00.668 Persistent Memory Region: Not Supported 00:15:00.668 Optional Asynchronous Events Supported 00:15:00.668 Namespace Attribute Notices: Supported 00:15:00.668 Firmware Activation Notices: Not Supported 00:15:00.668 ANA Change Notices: Not Supported 00:15:00.668 PLE Aggregate Log Change Notices: Not Supported 00:15:00.668 LBA Status Info Alert Notices: Not Supported 00:15:00.668 EGE Aggregate Log Change Notices: Not Supported 00:15:00.668 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.668 Zone Descriptor Change Notices: Not Supported 00:15:00.668 Discovery Log Change Notices: Not Supported 00:15:00.668 Controller Attributes 00:15:00.668 128-bit Host Identifier: Supported 00:15:00.668 Non-Operational Permissive Mode: Not Supported 00:15:00.668 NVM Sets: Not Supported 00:15:00.668 Read Recovery Levels: Not Supported 00:15:00.668 Endurance Groups: Not Supported 00:15:00.668 Predictable Latency Mode: Not Supported 00:15:00.668 Traffic Based Keep ALive: Not Supported 00:15:00.668 Namespace Granularity: Not Supported 00:15:00.668 SQ Associations: Not Supported 00:15:00.668 UUID List: Not Supported 00:15:00.668 Multi-Domain Subsystem: Not Supported 00:15:00.668 Fixed Capacity Management: Not Supported 00:15:00.668 Variable Capacity Management: Not Supported 00:15:00.668 Delete Endurance Group: Not Supported 00:15:00.668 Delete NVM Set: Not Supported 00:15:00.668 Extended LBA Formats Supported: Not Supported 00:15:00.668 Flexible Data Placement Supported: Not Supported 00:15:00.668 00:15:00.668 Controller Memory Buffer Support 00:15:00.668 ================================ 00:15:00.668 
Supported: No 00:15:00.668 00:15:00.668 Persistent Memory Region Support 00:15:00.668 ================================ 00:15:00.668 Supported: No 00:15:00.668 00:15:00.668 Admin Command Set Attributes 00:15:00.668 ============================ 00:15:00.668 Security Send/Receive: Not Supported 00:15:00.668 Format NVM: Not Supported 00:15:00.668 Firmware Activate/Download: Not Supported 00:15:00.668 Namespace Management: Not Supported 00:15:00.668 Device Self-Test: Not Supported 00:15:00.668 Directives: Not Supported 00:15:00.668 NVMe-MI: Not Supported 00:15:00.668 Virtualization Management: Not Supported 00:15:00.668 Doorbell Buffer Config: Not Supported 00:15:00.668 Get LBA Status Capability: Not Supported 00:15:00.668 Command & Feature Lockdown Capability: Not Supported 00:15:00.668 Abort Command Limit: 4 00:15:00.668 Async Event Request Limit: 4 00:15:00.668 Number of Firmware Slots: N/A 00:15:00.668 Firmware Slot 1 Read-Only: N/A 00:15:00.668 Firmware Activation Without Reset: N/A 00:15:00.668 Multiple Update Detection Support: N/A 00:15:00.668 Firmware Update Granularity: No Information Provided 00:15:00.668 Per-Namespace SMART Log: No 00:15:00.668 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.668 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:00.668 Command Effects Log Page: Supported 00:15:00.668 Get Log Page Extended Data: Supported 00:15:00.668 Telemetry Log Pages: Not Supported 00:15:00.668 Persistent Event Log Pages: Not Supported 00:15:00.668 Supported Log Pages Log Page: May Support 00:15:00.668 Commands Supported & Effects Log Page: Not Supported 00:15:00.668 Feature Identifiers & Effects Log Page:May Support 00:15:00.668 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.668 Data Area 4 for Telemetry Log: Not Supported 00:15:00.668 Error Log Page Entries Supported: 128 00:15:00.668 Keep Alive: Supported 00:15:00.668 Keep Alive Granularity: 10000 ms 00:15:00.668 00:15:00.668 NVM Command Set Attributes 00:15:00.668 ========================== 00:15:00.668 Submission Queue Entry Size 00:15:00.668 Max: 64 00:15:00.668 Min: 64 00:15:00.668 Completion Queue Entry Size 00:15:00.668 Max: 16 00:15:00.668 Min: 16 00:15:00.668 Number of Namespaces: 32 00:15:00.668 Compare Command: Supported 00:15:00.668 Write Uncorrectable Command: Not Supported 00:15:00.668 Dataset Management Command: Supported 00:15:00.668 Write Zeroes Command: Supported 00:15:00.668 Set Features Save Field: Not Supported 00:15:00.668 Reservations: Not Supported 00:15:00.669 Timestamp: Not Supported 00:15:00.669 Copy: Supported 00:15:00.669 Volatile Write Cache: Present 00:15:00.669 Atomic Write Unit (Normal): 1 00:15:00.669 Atomic Write Unit (PFail): 1 00:15:00.669 Atomic Compare & Write Unit: 1 00:15:00.669 Fused Compare & Write: Supported 00:15:00.669 Scatter-Gather List 00:15:00.669 SGL Command Set: Supported (Dword aligned) 00:15:00.669 SGL Keyed: Not Supported 00:15:00.669 SGL Bit Bucket Descriptor: Not Supported 00:15:00.669 SGL Metadata Pointer: Not Supported 00:15:00.669 Oversized SGL: Not Supported 00:15:00.669 SGL Metadata Address: Not Supported 00:15:00.669 SGL Offset: Not Supported 00:15:00.669 Transport SGL Data Block: Not Supported 00:15:00.669 Replay Protected Memory Block: Not Supported 00:15:00.669 00:15:00.669 Firmware Slot Information 00:15:00.669 ========================= 00:15:00.669 Active slot: 1 00:15:00.669 Slot 1 Firmware Revision: 25.01 00:15:00.669 00:15:00.669 00:15:00.669 Commands Supported and Effects 00:15:00.669 ============================== 00:15:00.669 Admin 
Commands 00:15:00.669 -------------- 00:15:00.669 Get Log Page (02h): Supported 00:15:00.669 Identify (06h): Supported 00:15:00.669 Abort (08h): Supported 00:15:00.669 Set Features (09h): Supported 00:15:00.669 Get Features (0Ah): Supported 00:15:00.669 Asynchronous Event Request (0Ch): Supported 00:15:00.669 Keep Alive (18h): Supported 00:15:00.669 I/O Commands 00:15:00.669 ------------ 00:15:00.669 Flush (00h): Supported LBA-Change 00:15:00.669 Write (01h): Supported LBA-Change 00:15:00.669 Read (02h): Supported 00:15:00.669 Compare (05h): Supported 00:15:00.669 Write Zeroes (08h): Supported LBA-Change 00:15:00.669 Dataset Management (09h): Supported LBA-Change 00:15:00.669 Copy (19h): Supported LBA-Change 00:15:00.669 00:15:00.669 Error Log 00:15:00.669 ========= 00:15:00.669 00:15:00.669 Arbitration 00:15:00.669 =========== 00:15:00.669 Arbitration Burst: 1 00:15:00.669 00:15:00.669 Power Management 00:15:00.669 ================ 00:15:00.669 Number of Power States: 1 00:15:00.669 Current Power State: Power State #0 00:15:00.669 Power State #0: 00:15:00.669 Max Power: 0.00 W 00:15:00.669 Non-Operational State: Operational 00:15:00.669 Entry Latency: Not Reported 00:15:00.669 Exit Latency: Not Reported 00:15:00.669 Relative Read Throughput: 0 00:15:00.669 Relative Read Latency: 0 00:15:00.669 Relative Write Throughput: 0 00:15:00.669 Relative Write Latency: 0 00:15:00.669 Idle Power: Not Reported 00:15:00.669 Active Power: Not Reported 00:15:00.669 Non-Operational Permissive Mode: Not Supported 00:15:00.669 00:15:00.669 Health Information 00:15:00.669 ================== 00:15:00.669 Critical Warnings: 00:15:00.669 Available Spare Space: OK 00:15:00.669 Temperature: OK 00:15:00.669 Device Reliability: OK 00:15:00.669 Read Only: No 00:15:00.669 Volatile Memory Backup: OK 00:15:00.669 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:00.669 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.669 Available Spare: 0% 00:15:00.669 Available Sp[2024-11-15 10:55:19.955773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:00.669 [2024-11-15 10:55:19.955783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:00.669 [2024-11-15 10:55:19.955803] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:00.669 [2024-11-15 10:55:19.955810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.669 [2024-11-15 10:55:19.955815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.669 [2024-11-15 10:55:19.955819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.669 [2024-11-15 10:55:19.955824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.669 [2024-11-15 10:55:19.956092] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.669 [2024-11-15 10:55:19.956100] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:00.669 [2024-11-15 10:55:19.957097] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.669 [2024-11-15 10:55:19.957140] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:00.669 [2024-11-15 10:55:19.957145] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:00.669 [2024-11-15 10:55:19.958110] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:00.669 [2024-11-15 10:55:19.958118] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:00.669 [2024-11-15 10:55:19.958168] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:00.669 [2024-11-15 10:55:19.959135] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.669 are Threshold: 0% 00:15:00.669 Life Percentage Used: 0% 00:15:00.669 Data Units Read: 0 00:15:00.669 Data Units Written: 0 00:15:00.669 Host Read Commands: 0 00:15:00.669 Host Write Commands: 0 00:15:00.669 Controller Busy Time: 0 minutes 00:15:00.669 Power Cycles: 0 00:15:00.669 Power On Hours: 0 hours 00:15:00.669 Unsafe Shutdowns: 0 00:15:00.669 Unrecoverable Media Errors: 0 00:15:00.669 Lifetime Error Log Entries: 0 00:15:00.669 Warning Temperature Time: 0 minutes 00:15:00.669 Critical Temperature Time: 0 minutes 00:15:00.669 00:15:00.669 Number of Queues 00:15:00.669 ================ 00:15:00.669 Number of I/O Submission Queues: 127 00:15:00.669 Number of I/O Completion Queues: 127 00:15:00.669 00:15:00.669 Active Namespaces 00:15:00.669 ================= 00:15:00.669 Namespace ID:1 00:15:00.669 Error Recovery Timeout: Unlimited 00:15:00.669 Command Set Identifier: NVM (00h) 00:15:00.669 Deallocate: Supported 00:15:00.669 Deallocated/Unwritten Error: Not Supported 00:15:00.669 Deallocated Read Value: Unknown 00:15:00.669 Deallocate in Write Zeroes: Not Supported 00:15:00.669 Deallocated Guard Field: 0xFFFF 00:15:00.669 Flush: Supported 00:15:00.669 Reservation: Supported 00:15:00.669 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.669 Size (in LBAs): 131072 (0GiB) 00:15:00.669 Capacity (in LBAs): 131072 (0GiB) 00:15:00.669 Utilization (in LBAs): 131072 (0GiB) 00:15:00.669 NGUID: 4E0FA8D2906E49B384DE9D4BF44937C8 00:15:00.669 UUID: 4e0fa8d2-906e-49b3-84de-9d4bf44937c8 00:15:00.669 Thin Provisioning: Not Supported 00:15:00.669 Per-NS Atomic Units: Yes 00:15:00.669 Atomic Boundary Size (Normal): 0 00:15:00.669 Atomic Boundary Size (PFail): 0 00:15:00.669 Atomic Boundary Offset: 0 00:15:00.669 Maximum Single Source Range Length: 65535 00:15:00.669 Maximum Copy Length: 65535 00:15:00.669 Maximum Source Range Count: 1 00:15:00.669 NGUID/EUI64 Never Reused: No 00:15:00.669 Namespace Write Protected: No 00:15:00.669 Number of LBA Formats: 1 00:15:00.669 Current LBA Format: LBA Format #00 00:15:00.669 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.669 00:15:00.669 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
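The spdk_nvme_perf invocation above (like the identify, reconnect, and arbitration tools later in this run) reaches the vfio-user target through SPDK's userspace NVMe driver. A minimal sketch of that attach path, assuming an SPDK build and reusing the exact transport-ID string from the -r argument; function names are from the public spdk/nvme.h API, and error handling is abbreviated:

/* Sketch: connect to the vfio-user controller the way the perf/identify
 * tools in this run do. The trid string matches the test's -r argument. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    env_opts.name = "vfio_user_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    struct spdk_nvme_transport_id trid = {0};
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
            "subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous attach; the enable_ctrlr / CSTS.RDY handshake traced in
     * the surrounding log happens inside this call. */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); /* 0 if none */
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
    printf("nsid %u: %" PRIu64 " sectors of %u bytes\n", nsid,
           spdk_nvme_ns_get_num_sectors(ns),
           spdk_nvme_ns_get_sector_size(ns));

    spdk_nvme_detach(ctrlr); /* drives the "Prepare to destruct SSD" path */
    return 0;
}

spdk_nvme_connect() is what produces the enable_ctrlr notice that opens the perf output below.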
00:15:00.669 [2024-11-15 10:55:20.157288] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.953 Initializing NVMe Controllers 00:15:05.953 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.953 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:05.953 Initialization complete. Launching workers. 00:15:05.953 ======================================================== 00:15:05.953 Latency(us) 00:15:05.953 Device Information : IOPS MiB/s Average min max 00:15:05.953 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40142.60 156.81 3189.08 848.89 6897.89 00:15:05.953 ======================================================== 00:15:05.953 Total : 40142.60 156.81 3189.08 848.89 6897.89 00:15:05.953 00:15:05.954 [2024-11-15 10:55:25.178557] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.954 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.954 [2024-11-15 10:55:25.374435] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.234 Initializing NVMe Controllers 00:15:11.234 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:11.234 Initialization complete. Launching workers. 
00:15:11.234 ======================================================== 00:15:11.234 Latency(us) 00:15:11.234 Device Information : IOPS MiB/s Average min max 00:15:11.234 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.96 62.72 7977.66 5986.47 9969.80 00:15:11.234 ======================================================== 00:15:11.234 Total : 16055.96 62.72 7977.66 5986.47 9969.80 00:15:11.234 00:15:11.234 [2024-11-15 10:55:30.416712] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.234 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.234 [2024-11-15 10:55:30.624609] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.516 [2024-11-15 10:55:35.698752] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.516 Initializing NVMe Controllers 00:15:16.516 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.516 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:16.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:16.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:16.516 Initialization complete. Launching workers. 00:15:16.516 Starting thread on core 2 00:15:16.516 Starting thread on core 3 00:15:16.516 Starting thread on core 1 00:15:16.516 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:16.516 [2024-11-15 10:55:35.948017] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.813 [2024-11-15 10:55:39.009741] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.813 Initializing NVMe Controllers 00:15:19.813 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.813 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.813 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:19.813 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:19.813 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:19.813 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:19.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.813 Initialization complete. Launching workers. 
00:15:19.813 Starting thread on core 1 with urgent priority queue 00:15:19.813 Starting thread on core 2 with urgent priority queue 00:15:19.813 Starting thread on core 3 with urgent priority queue 00:15:19.813 Starting thread on core 0 with urgent priority queue 00:15:19.813 SPDK bdev Controller (SPDK1 ) core 0: 9042.00 IO/s 11.06 secs/100000 ios 00:15:19.813 SPDK bdev Controller (SPDK1 ) core 1: 10716.67 IO/s 9.33 secs/100000 ios 00:15:19.813 SPDK bdev Controller (SPDK1 ) core 2: 10259.67 IO/s 9.75 secs/100000 ios 00:15:19.813 SPDK bdev Controller (SPDK1 ) core 3: 11038.33 IO/s 9.06 secs/100000 ios 00:15:19.813 ======================================================== 00:15:19.813 00:15:19.813 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.813 [2024-11-15 10:55:39.249969] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.813 Initializing NVMe Controllers 00:15:19.813 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.813 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.813 Namespace ID: 1 size: 0GB 00:15:19.813 Initialization complete. 00:15:19.813 INFO: using host memory buffer for IO 00:15:19.813 Hello world! 00:15:19.813 [2024-11-15 10:55:39.284190] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.813 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.073 [2024-11-15 10:55:39.522990] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.015 Initializing NVMe Controllers 00:15:21.015 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.015 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.015 Initialization complete. Launching workers. 
00:15:21.015 submit (in ns) avg, min, max = 6958.5, 2814.2, 3998850.8 00:15:21.015 complete (in ns) avg, min, max = 15487.4, 1635.0, 4024843.3 00:15:21.015 00:15:21.015 Submit histogram 00:15:21.015 ================ 00:15:21.015 Range in us Cumulative Count 00:15:21.015 2.813 - 2.827: 0.2597% ( 52) 00:15:21.015 2.827 - 2.840: 1.1286% ( 174) 00:15:21.015 2.840 - 2.853: 3.3310% ( 441) 00:15:21.015 2.853 - 2.867: 7.8955% ( 914) 00:15:21.015 2.867 - 2.880: 12.4101% ( 904) 00:15:21.015 2.880 - 2.893: 18.4379% ( 1207) 00:15:21.015 2.893 - 2.907: 24.9600% ( 1306) 00:15:21.015 2.907 - 2.920: 30.9778% ( 1205) 00:15:21.015 2.920 - 2.933: 37.1654% ( 1239) 00:15:21.015 2.933 - 2.947: 42.7537% ( 1119) 00:15:21.015 2.947 - 2.960: 47.6428% ( 979) 00:15:21.015 2.960 - 2.973: 53.4559% ( 1164) 00:15:21.015 2.973 - 2.987: 61.3464% ( 1580) 00:15:21.015 2.987 - 3.000: 69.8462% ( 1702) 00:15:21.015 3.000 - 3.013: 78.0114% ( 1635) 00:15:21.015 3.013 - 3.027: 84.9081% ( 1381) 00:15:21.015 3.027 - 3.040: 90.6912% ( 1158) 00:15:21.015 3.040 - 3.053: 94.2419% ( 711) 00:15:21.015 3.053 - 3.067: 96.9586% ( 544) 00:15:21.015 3.067 - 3.080: 98.3320% ( 275) 00:15:21.015 3.080 - 3.093: 98.9962% ( 133) 00:15:21.015 3.093 - 3.107: 99.2809% ( 57) 00:15:21.015 3.107 - 3.120: 99.4357% ( 31) 00:15:21.015 3.120 - 3.133: 99.4956% ( 12) 00:15:21.015 3.133 - 3.147: 99.5256% ( 6) 00:15:21.015 3.147 - 3.160: 99.5306% ( 1) 00:15:21.015 3.253 - 3.267: 99.5356% ( 1) 00:15:21.015 3.347 - 3.360: 99.5406% ( 1) 00:15:21.015 3.467 - 3.493: 99.5455% ( 1) 00:15:21.015 3.520 - 3.547: 99.5555% ( 2) 00:15:21.015 3.547 - 3.573: 99.5605% ( 1) 00:15:21.015 3.573 - 3.600: 99.5655% ( 1) 00:15:21.015 3.707 - 3.733: 99.5705% ( 1) 00:15:21.015 3.813 - 3.840: 99.5755% ( 1) 00:15:21.015 3.947 - 3.973: 99.5805% ( 1) 00:15:21.015 4.400 - 4.427: 99.5905% ( 2) 00:15:21.015 4.480 - 4.507: 99.5955% ( 1) 00:15:21.015 4.533 - 4.560: 99.6055% ( 2) 00:15:21.015 4.800 - 4.827: 99.6105% ( 1) 00:15:21.015 4.827 - 4.853: 99.6155% ( 1) 00:15:21.015 4.853 - 4.880: 99.6205% ( 1) 00:15:21.015 4.907 - 4.933: 99.6354% ( 3) 00:15:21.015 4.960 - 4.987: 99.6404% ( 1) 00:15:21.015 5.013 - 5.040: 99.6504% ( 2) 00:15:21.015 5.040 - 5.067: 99.6654% ( 3) 00:15:21.015 5.067 - 5.093: 99.6754% ( 2) 00:15:21.015 5.093 - 5.120: 99.6854% ( 2) 00:15:21.015 5.147 - 5.173: 99.6954% ( 2) 00:15:21.015 5.200 - 5.227: 99.7004% ( 1) 00:15:21.015 5.227 - 5.253: 99.7054% ( 1) 00:15:21.015 5.253 - 5.280: 99.7103% ( 1) 00:15:21.015 5.280 - 5.307: 99.7203% ( 2) 00:15:21.015 5.333 - 5.360: 99.7253% ( 1) 00:15:21.015 5.387 - 5.413: 99.7353% ( 2) 00:15:21.015 5.413 - 5.440: 99.7403% ( 1) 00:15:21.015 5.440 - 5.467: 99.7453% ( 1) 00:15:21.015 5.467 - 5.493: 99.7503% ( 1) 00:15:21.015 5.520 - 5.547: 99.7553% ( 1) 00:15:21.015 5.573 - 5.600: 99.7603% ( 1) 00:15:21.015 5.707 - 5.733: 99.7653% ( 1) 00:15:21.015 5.813 - 5.840: 99.7703% ( 1) 00:15:21.015 5.867 - 5.893: 99.7803% ( 2) 00:15:21.015 5.920 - 5.947: 99.7903% ( 2) 00:15:21.015 6.000 - 6.027: 99.7952% ( 1) 00:15:21.015 6.107 - 6.133: 99.8052% ( 2) 00:15:21.015 6.133 - 6.160: 99.8102% ( 1) 00:15:21.015 6.160 - 6.187: 99.8202% ( 2) 00:15:21.015 6.213 - 6.240: 99.8252% ( 1) 00:15:21.015 6.267 - 6.293: 99.8302% ( 1) 00:15:21.015 6.293 - 6.320: 99.8402% ( 2) 00:15:21.015 6.347 - 6.373: 99.8502% ( 2) 00:15:21.015 6.373 - 6.400: 99.8552% ( 1) 00:15:21.015 6.480 - 6.507: 99.8602% ( 1) 00:15:21.015 6.507 - 6.533: 99.8652% ( 1) 00:15:21.015 6.533 - 6.560: 99.8702% ( 1) 00:15:21.015 6.587 - 6.613: 99.8751% ( 1) 00:15:21.015 7.573 - 7.627: 99.8801% ( 1) 
00:15:21.015 8.053 - 8.107: 99.8851% ( 1) 00:15:21.015 9.333 - 9.387: 99.8901% ( 1) 00:15:21.015 [2024-11-15 10:55:40.543552] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.277 10.293 - 10.347: 99.8951% ( 1) 00:15:21.277 33.067 - 33.280: 99.9001% ( 1) 00:15:21.277 3986.773 - 4014.080: 100.0000% ( 20) 00:15:21.277 00:15:21.277 Complete histogram 00:15:21.277 ================== 00:15:21.277 Range in us Cumulative Count 00:15:21.277 1.633 - 1.640: 0.4844% ( 97) 00:15:21.277 1.640 - 1.647: 0.8490% ( 73) 00:15:21.277 1.647 - 1.653: 0.8939% ( 9) 00:15:21.277 1.653 - 1.660: 1.0338% ( 28) 00:15:21.277 1.660 - 1.667: 1.1386% ( 21) 00:15:21.277 1.667 - 1.673: 1.1936% ( 11) 00:15:21.277 1.673 - 1.680: 1.1986% ( 1) 00:15:21.277 1.680 - 1.687: 3.4059% ( 442) 00:15:21.277 1.687 - 1.693: 40.0569% ( 7339) 00:15:21.277 1.693 - 1.700: 50.7641% ( 2144) 00:15:21.277 1.700 - 1.707: 58.7295% ( 1595) 00:15:21.277 1.707 - 1.720: 75.3346% ( 3325) 00:15:21.277 1.720 - 1.733: 81.9616% ( 1327) 00:15:21.277 1.733 - 1.747: 83.3799% ( 284) 00:15:21.277 1.747 - 1.760: 88.5288% ( 1031) 00:15:21.277 1.760 - 1.773: 94.4317% ( 1182) 00:15:21.277 1.773 - 1.787: 97.4131% ( 597) 00:15:21.277 1.787 - 1.800: 99.0262% ( 323) 00:15:21.277 1.800 - 1.813: 99.4157% ( 78) 00:15:21.277 1.813 - 1.827: 99.4856% ( 14) 00:15:21.277 1.827 - 1.840: 99.4956% ( 2) 00:15:21.277 1.840 - 1.853: 99.5006% ( 1) 00:15:21.277 1.853 - 1.867: 99.5056% ( 1) 00:15:21.277 1.933 - 1.947: 99.5106% ( 1) 00:15:21.277 1.960 - 1.973: 99.5156% ( 1) 00:15:21.277 1.987 - 2.000: 99.5256% ( 2) 00:15:21.277 2.040 - 2.053: 99.5306% ( 1) 00:15:21.277 2.093 - 2.107: 99.5356% ( 1) 00:15:21.277 3.373 - 3.387: 99.5406% ( 1) 00:15:21.277 3.840 - 3.867: 99.5455% ( 1) 00:15:21.277 3.893 - 3.920: 99.5505% ( 1) 00:15:21.277 3.947 - 3.973: 99.5555% ( 1) 00:15:21.277 4.000 - 4.027: 99.5605% ( 1) 00:15:21.277 4.027 - 4.053: 99.5705% ( 2) 00:15:21.277 4.480 - 4.507: 99.5755% ( 1) 00:15:21.277 4.507 - 4.533: 99.5905% ( 3) 00:15:21.277 4.613 - 4.640: 99.5955% ( 1) 00:15:21.277 4.667 - 4.693: 99.6005% ( 1) 00:15:21.277 4.827 - 4.853: 99.6055% ( 1) 00:15:21.277 4.853 - 4.880: 99.6105% ( 1) 00:15:21.277 4.987 - 5.013: 99.6155% ( 1) 00:15:21.277 5.067 - 5.093: 99.6205% ( 1) 00:15:21.277 5.200 - 5.227: 99.6254% ( 1) 00:15:21.277 5.227 - 5.253: 99.6304% ( 1) 00:15:21.277 5.680 - 5.707: 99.6354% ( 1) 00:15:21.277 7.947 - 8.000: 99.6404% ( 1) 00:15:21.277 10.987 - 11.040: 99.6454% ( 1) 00:15:21.277 64.427 - 64.853: 99.6504% ( 1) 00:15:21.277 82.347 - 82.773: 99.6554% ( 1) 00:15:21.277 3986.773 - 4014.080: 99.9900% ( 67) 00:15:21.277 4014.080 - 4041.387: 100.0000% ( 2) 00:15:21.277 00:15:21.277 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:21.277 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.277 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.277 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:21.277 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.277 [ 00:15:21.277 { 00:15:21.277 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:21.277 "subtype": "Discovery", 00:15:21.277 "listen_addresses": [], 00:15:21.277 "allow_any_host": true, 00:15:21.277 "hosts": [] 00:15:21.277 }, 00:15:21.277 { 00:15:21.277 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.277 "subtype": "NVMe", 00:15:21.277 "listen_addresses": [ 00:15:21.277 { 00:15:21.277 "trtype": "VFIOUSER", 00:15:21.277 "adrfam": "IPv4", 00:15:21.277 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.277 "trsvcid": "0" 00:15:21.277 } 00:15:21.277 ], 00:15:21.277 "allow_any_host": true, 00:15:21.277 "hosts": [], 00:15:21.277 "serial_number": "SPDK1", 00:15:21.277 "model_number": "SPDK bdev Controller", 00:15:21.277 "max_namespaces": 32, 00:15:21.277 "min_cntlid": 1, 00:15:21.277 "max_cntlid": 65519, 00:15:21.277 "namespaces": [ 00:15:21.277 { 00:15:21.277 "nsid": 1, 00:15:21.277 "bdev_name": "Malloc1", 00:15:21.277 "name": "Malloc1", 00:15:21.277 "nguid": "4E0FA8D2906E49B384DE9D4BF44937C8", 00:15:21.277 "uuid": "4e0fa8d2-906e-49b3-84de-9d4bf44937c8" 00:15:21.277 } 00:15:21.277 ] 00:15:21.277 }, 00:15:21.277 { 00:15:21.277 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.277 "subtype": "NVMe", 00:15:21.277 "listen_addresses": [ 00:15:21.277 { 00:15:21.277 "trtype": "VFIOUSER", 00:15:21.277 "adrfam": "IPv4", 00:15:21.277 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.277 "trsvcid": "0" 00:15:21.277 } 00:15:21.277 ], 00:15:21.277 "allow_any_host": true, 00:15:21.277 "hosts": [], 00:15:21.277 "serial_number": "SPDK2", 00:15:21.277 "model_number": "SPDK bdev Controller", 00:15:21.278 "max_namespaces": 32, 00:15:21.278 "min_cntlid": 1, 00:15:21.278 "max_cntlid": 65519, 00:15:21.278 "namespaces": [ 00:15:21.278 { 00:15:21.278 "nsid": 1, 00:15:21.278 "bdev_name": "Malloc2", 00:15:21.278 "name": "Malloc2", 00:15:21.278 "nguid": "30BA6270C13C4EDA998095EE2CCDE0F4", 00:15:21.278 "uuid": "30ba6270-c13c-4eda-9980-95ee2ccde0f4" 00:15:21.278 } 00:15:21.278 ] 00:15:21.278 } 00:15:21.278 ] 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=345715 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.278 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.540 [2024-11-15 10:55:40.917873] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.540 Malloc3 00:15:21.540 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.800 [2024-11-15 10:55:41.120257] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.800 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.800 Asynchronous Event Request test 00:15:21.800 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.800 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.800 Registering asynchronous event callbacks... 00:15:21.800 Starting namespace attribute notice tests for all controllers... 00:15:21.800 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.800 aer_cb - Changed Namespace 00:15:21.800 Cleaning up... 00:15:21.800 [ 00:15:21.800 { 00:15:21.800 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.800 "subtype": "Discovery", 00:15:21.800 "listen_addresses": [], 00:15:21.800 "allow_any_host": true, 00:15:21.800 "hosts": [] 00:15:21.800 }, 00:15:21.800 { 00:15:21.800 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.800 "subtype": "NVMe", 00:15:21.800 "listen_addresses": [ 00:15:21.800 { 00:15:21.800 "trtype": "VFIOUSER", 00:15:21.800 "adrfam": "IPv4", 00:15:21.800 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.800 "trsvcid": "0" 00:15:21.800 } 00:15:21.800 ], 00:15:21.800 "allow_any_host": true, 00:15:21.800 "hosts": [], 00:15:21.800 "serial_number": "SPDK1", 00:15:21.800 "model_number": "SPDK bdev Controller", 00:15:21.800 "max_namespaces": 32, 00:15:21.800 "min_cntlid": 1, 00:15:21.800 "max_cntlid": 65519, 00:15:21.800 "namespaces": [ 00:15:21.800 { 00:15:21.800 "nsid": 1, 00:15:21.800 "bdev_name": "Malloc1", 00:15:21.800 "name": "Malloc1", 00:15:21.800 "nguid": "4E0FA8D2906E49B384DE9D4BF44937C8", 00:15:21.800 "uuid": "4e0fa8d2-906e-49b3-84de-9d4bf44937c8" 00:15:21.800 }, 00:15:21.800 { 00:15:21.800 "nsid": 2, 00:15:21.800 "bdev_name": "Malloc3", 00:15:21.800 "name": "Malloc3", 00:15:21.800 "nguid": "9A7959213E164A27BCC0E4D627AAA5F0", 00:15:21.800 "uuid": "9a795921-3e16-4a27-bcc0-e4d627aaa5f0" 00:15:21.800 } 00:15:21.800 ] 00:15:21.800 }, 00:15:21.800 { 00:15:21.800 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.800 "subtype": "NVMe", 00:15:21.800 "listen_addresses": [ 00:15:21.800 { 00:15:21.800 "trtype": "VFIOUSER", 00:15:21.800 "adrfam": "IPv4", 00:15:21.800 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.800 "trsvcid": "0" 00:15:21.800 } 00:15:21.800 ], 00:15:21.800 "allow_any_host": true, 00:15:21.800 "hosts": [], 00:15:21.800 "serial_number": "SPDK2", 00:15:21.800 "model_number": "SPDK bdev 
Controller", 00:15:21.800 "max_namespaces": 32, 00:15:21.800 "min_cntlid": 1, 00:15:21.800 "max_cntlid": 65519, 00:15:21.800 "namespaces": [ 00:15:21.800 { 00:15:21.800 "nsid": 1, 00:15:21.800 "bdev_name": "Malloc2", 00:15:21.800 "name": "Malloc2", 00:15:21.800 "nguid": "30BA6270C13C4EDA998095EE2CCDE0F4", 00:15:21.800 "uuid": "30ba6270-c13c-4eda-9980-95ee2ccde0f4" 00:15:21.800 } 00:15:21.800 ] 00:15:21.800 } 00:15:21.800 ] 00:15:22.063 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 345715 00:15:22.063 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.063 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.063 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.063 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.063 [2024-11-15 10:55:41.363136] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:15:22.063 [2024-11-15 10:55:41.363176] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345896 ] 00:15:22.063 [2024-11-15 10:55:41.400965] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:22.063 [2024-11-15 10:55:41.406145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.063 [2024-11-15 10:55:41.406165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0b16897000 00:15:22.063 [2024-11-15 10:55:41.407147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.408156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.409163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.410168] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.411178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.412187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.413189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.063 [2024-11-15 10:55:41.414193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:22.063 [2024-11-15 10:55:41.415200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.063 [2024-11-15 10:55:41.415209] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0b1688c000 00:15:22.063 [2024-11-15 10:55:41.416121] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.063 [2024-11-15 10:55:41.429843] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:22.063 [2024-11-15 10:55:41.429863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:22.063 [2024-11-15 10:55:41.431914] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.063 [2024-11-15 10:55:41.431949] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.064 [2024-11-15 10:55:41.432008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:22.064 [2024-11-15 10:55:41.432021] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:22.064 [2024-11-15 10:55:41.432025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:22.064 [2024-11-15 10:55:41.432921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:22.064 [2024-11-15 10:55:41.432929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:22.064 [2024-11-15 10:55:41.432934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:22.064 [2024-11-15 10:55:41.433924] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.064 [2024-11-15 10:55:41.433934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:22.064 [2024-11-15 10:55:41.433940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.434932] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:22.064 [2024-11-15 10:55:41.434939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.435939] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:22.064 [2024-11-15 10:55:41.435946] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
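The get_reg_4/set_reg_4 records around this point trace the standard NVMe enable handshake over BAR 0: CC (offset 0x14) is read, CC.EN = 0 && CSTS.RDY = 0 is observed, CC.EN is written to 1, and CSTS (offset 0x1C) is polled until RDY reads 1. A self-contained toy of that state machine, with the register file simulated in memory; reg_read32/reg_write32 are stand-ins for nvme_vfio_ctrlr_get_reg_4/set_reg_4, and the offsets and bit positions are from the NVMe spec, not from this log:

/* Toy model of the CC.EN -> CSTS.RDY handshake traced above. The simulated
 * "controller" asserts RDY as soon as EN is written, as the vfio-user
 * target does here. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC    0x14  /* Controller Configuration */
#define NVME_REG_CSTS  0x1C  /* Controller Status */
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

static uint32_t bar0[0x40 / 4];  /* stand-in for the mapped BAR 0 region */

static uint32_t reg_read32(uint32_t off) { return bar0[off / 4]; }

static void reg_write32(uint32_t off, uint32_t val)
{
    bar0[off / 4] = val;
    /* Simulated device behavior: enabling the controller makes it ready. */
    if (off == NVME_REG_CC && (val & NVME_CC_EN)) {
        bar0[NVME_REG_CSTS / 4] |= NVME_CSTS_RDY;
    }
}

int main(void)
{
    uint32_t cc = reg_read32(NVME_REG_CC);
    uint32_t csts = reg_read32(NVME_REG_CSTS);
    printf("CC.EN = %u && CSTS.RDY = %u\n",
           cc & NVME_CC_EN, csts & NVME_CSTS_RDY);

    reg_write32(NVME_REG_CC, cc | NVME_CC_EN);  /* "Setting CC.EN = 1" */
    while (!(reg_read32(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
        /* a real driver bounds this poll with the 15000 ms timeout shown */
    }
    printf("controller is ready\n");
    return 0;
}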
00:15:22.064 [2024-11-15 10:55:41.435950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.435954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.436061] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:22.064 [2024-11-15 10:55:41.436064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.436067] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:22.064 [2024-11-15 10:55:41.440568] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:22.064 [2024-11-15 10:55:41.440982] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:22.064 [2024-11-15 10:55:41.441992] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.064 [2024-11-15 10:55:41.442997] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.064 [2024-11-15 10:55:41.443030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.064 [2024-11-15 10:55:41.444008] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:22.064 [2024-11-15 10:55:41.444016] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.064 [2024-11-15 10:55:41.444019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.444034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:22.064 [2024-11-15 10:55:41.444040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.444049] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.064 [2024-11-15 10:55:41.444053] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.064 [2024-11-15 10:55:41.444055] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.064 [2024-11-15 10:55:41.444064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.451569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.064 
[2024-11-15 10:55:41.451579] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:22.064 [2024-11-15 10:55:41.451583] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:22.064 [2024-11-15 10:55:41.451587] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:22.064 [2024-11-15 10:55:41.451590] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.064 [2024-11-15 10:55:41.451595] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:22.064 [2024-11-15 10:55:41.451599] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:22.064 [2024-11-15 10:55:41.451602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.451609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.451616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.459567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.064 [2024-11-15 10:55:41.459577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.064 [2024-11-15 10:55:41.459583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.064 [2024-11-15 10:55:41.459590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.064 [2024-11-15 10:55:41.459596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.064 [2024-11-15 10:55:41.459599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.459604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.459611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.467566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.064 [2024-11-15 10:55:41.467574] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:22.064 [2024-11-15 10:55:41.467578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:22.064 [2024-11-15 10:55:41.467583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.467587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.467594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.475566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.064 [2024-11-15 10:55:41.475615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.475621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.475627] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.064 [2024-11-15 10:55:41.475630] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.064 [2024-11-15 10:55:41.475632] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.064 [2024-11-15 10:55:41.475637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.483566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.064 [2024-11-15 10:55:41.483576] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:22.064 [2024-11-15 10:55:41.483585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.483591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.483596] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.064 [2024-11-15 10:55:41.483599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.064 [2024-11-15 10:55:41.483602] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.064 [2024-11-15 10:55:41.483606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.491567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.064 [2024-11-15 10:55:41.491580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.491586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.064 [2024-11-15 10:55:41.491591] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.064 [2024-11-15 10:55:41.491594] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.064 [2024-11-15 10:55:41.491596] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.064 [2024-11-15 10:55:41.491601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.064 [2024-11-15 10:55:41.499566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.499574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499602] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.065 [2024-11-15 10:55:41.499606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:22.065 [2024-11-15 10:55:41.499610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:22.065 [2024-11-15 10:55:41.499623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.507569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.507579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.515566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.515577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.523566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
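The Identify sequence just traced walks the CNS values visible in cdw10: 01h (identify controller), 02h (active namespace list), 00h (identify namespace), and 03h (namespace ID descriptors). A minimal sketch of issuing the first of these as a raw admin command, assuming a ctrlr attached as in the earlier sketch; calls are from the public SPDK API, and the unbounded polling loop is a simplification of the 30000 ms state timeouts shown above:

/* Sketch of the raw IDENTIFY (06h) admin command with CNS 01h, i.e. the
 * cdw10:00000001 command in the trace. Buffer must be DMA-able, hence
 * spdk_zmalloc with SPDK_MALLOC_DMA. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool g_done;

static void identify_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "identify failed\n");
    }
    g_done = true;
}

int identify_controller(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_ctrlr_data *cdata =
        spdk_zmalloc(4096, 4096, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    if (cdata == NULL) {
        return -1;
    }

    struct spdk_nvme_cmd cmd = {0};
    cmd.opc = SPDK_NVME_OPC_IDENTIFY;      /* opcode 06h, as in the log */
    cmd.cdw10 = SPDK_NVME_IDENTIFY_CTRLR;  /* CNS 01h -> cdw10:00000001 */

    if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, cdata, 4096,
                                      identify_done, NULL) != 0) {
        spdk_free(cdata);
        return -1;
    }
    while (!g_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    printf("SN: %.20s MN: %.40s\n",
           (const char *)cdata->sn, (const char *)cdata->mn);
    spdk_free(cdata);
    return 0;
}

The controller data returned this way is what spdk_nvme_identify pretty-prints as the "Controller Capabilities/Features" dump that follows.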
00:15:22.065 [2024-11-15 10:55:41.523576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.531566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.531578] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.065 [2024-11-15 10:55:41.531582] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.065 [2024-11-15 10:55:41.531584] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.065 [2024-11-15 10:55:41.531587] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.065 [2024-11-15 10:55:41.531589] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:22.065 [2024-11-15 10:55:41.531594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.065 [2024-11-15 10:55:41.531599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.065 [2024-11-15 10:55:41.531602] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.065 [2024-11-15 10:55:41.531605] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.065 [2024-11-15 10:55:41.531609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.531614] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.065 [2024-11-15 10:55:41.531617] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.065 [2024-11-15 10:55:41.531620] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.065 [2024-11-15 10:55:41.531624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.531631] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.065 [2024-11-15 10:55:41.531634] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.065 [2024-11-15 10:55:41.531636] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.065 [2024-11-15 10:55:41.531641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.065 [2024-11-15 10:55:41.539567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.539579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.065 [2024-11-15 10:55:41.539586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.065 
[2024-11-15 10:55:41.539592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.065 ===================================================== 00:15:22.065 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.065 ===================================================== 00:15:22.065 Controller Capabilities/Features 00:15:22.065 ================================ 00:15:22.065 Vendor ID: 4e58 00:15:22.065 Subsystem Vendor ID: 4e58 00:15:22.065 Serial Number: SPDK2 00:15:22.065 Model Number: SPDK bdev Controller 00:15:22.065 Firmware Version: 25.01 00:15:22.065 Recommended Arb Burst: 6 00:15:22.065 IEEE OUI Identifier: 8d 6b 50 00:15:22.065 Multi-path I/O 00:15:22.065 May have multiple subsystem ports: Yes 00:15:22.065 May have multiple controllers: Yes 00:15:22.065 Associated with SR-IOV VF: No 00:15:22.065 Max Data Transfer Size: 131072 00:15:22.065 Max Number of Namespaces: 32 00:15:22.065 Max Number of I/O Queues: 127 00:15:22.065 NVMe Specification Version (VS): 1.3 00:15:22.065 NVMe Specification Version (Identify): 1.3 00:15:22.065 Maximum Queue Entries: 256 00:15:22.065 Contiguous Queues Required: Yes 00:15:22.065 Arbitration Mechanisms Supported 00:15:22.065 Weighted Round Robin: Not Supported 00:15:22.065 Vendor Specific: Not Supported 00:15:22.065 Reset Timeout: 15000 ms 00:15:22.065 Doorbell Stride: 4 bytes 00:15:22.065 NVM Subsystem Reset: Not Supported 00:15:22.065 Command Sets Supported 00:15:22.065 NVM Command Set: Supported 00:15:22.065 Boot Partition: Not Supported 00:15:22.065 Memory Page Size Minimum: 4096 bytes 00:15:22.065 Memory Page Size Maximum: 4096 bytes 00:15:22.065 Persistent Memory Region: Not Supported 00:15:22.065 Optional Asynchronous Events Supported 00:15:22.065 Namespace Attribute Notices: Supported 00:15:22.065 Firmware Activation Notices: Not Supported 00:15:22.065 ANA Change Notices: Not Supported 00:15:22.065 PLE Aggregate Log Change Notices: Not Supported 00:15:22.065 LBA Status Info Alert Notices: Not Supported 00:15:22.065 EGE Aggregate Log Change Notices: Not Supported 00:15:22.065 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.065 Zone Descriptor Change Notices: Not Supported 00:15:22.065 Discovery Log Change Notices: Not Supported 00:15:22.065 Controller Attributes 00:15:22.065 128-bit Host Identifier: Supported 00:15:22.065 Non-Operational Permissive Mode: Not Supported 00:15:22.065 NVM Sets: Not Supported 00:15:22.065 Read Recovery Levels: Not Supported 00:15:22.065 Endurance Groups: Not Supported 00:15:22.065 Predictable Latency Mode: Not Supported 00:15:22.065 Traffic Based Keep ALive: Not Supported 00:15:22.065 Namespace Granularity: Not Supported 00:15:22.065 SQ Associations: Not Supported 00:15:22.065 UUID List: Not Supported 00:15:22.065 Multi-Domain Subsystem: Not Supported 00:15:22.065 Fixed Capacity Management: Not Supported 00:15:22.065 Variable Capacity Management: Not Supported 00:15:22.065 Delete Endurance Group: Not Supported 00:15:22.065 Delete NVM Set: Not Supported 00:15:22.065 Extended LBA Formats Supported: Not Supported 00:15:22.065 Flexible Data Placement Supported: Not Supported 00:15:22.065 00:15:22.065 Controller Memory Buffer Support 00:15:22.065 ================================ 00:15:22.065 Supported: No 00:15:22.065 00:15:22.065 Persistent Memory Region Support 00:15:22.065 ================================ 00:15:22.065 Supported: No 00:15:22.065 00:15:22.065 Admin Command Set Attributes 
00:15:22.065 ============================ 00:15:22.065 Security Send/Receive: Not Supported 00:15:22.065 Format NVM: Not Supported 00:15:22.065 Firmware Activate/Download: Not Supported 00:15:22.065 Namespace Management: Not Supported 00:15:22.065 Device Self-Test: Not Supported 00:15:22.065 Directives: Not Supported 00:15:22.065 NVMe-MI: Not Supported 00:15:22.065 Virtualization Management: Not Supported 00:15:22.065 Doorbell Buffer Config: Not Supported 00:15:22.065 Get LBA Status Capability: Not Supported 00:15:22.065 Command & Feature Lockdown Capability: Not Supported 00:15:22.065 Abort Command Limit: 4 00:15:22.065 Async Event Request Limit: 4 00:15:22.065 Number of Firmware Slots: N/A 00:15:22.065 Firmware Slot 1 Read-Only: N/A 00:15:22.065 Firmware Activation Without Reset: N/A 00:15:22.065 Multiple Update Detection Support: N/A 00:15:22.065 Firmware Update Granularity: No Information Provided 00:15:22.065 Per-Namespace SMART Log: No 00:15:22.065 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.065 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:22.065 Command Effects Log Page: Supported 00:15:22.065 Get Log Page Extended Data: Supported 00:15:22.065 Telemetry Log Pages: Not Supported 00:15:22.065 Persistent Event Log Pages: Not Supported 00:15:22.065 Supported Log Pages Log Page: May Support 00:15:22.065 Commands Supported & Effects Log Page: Not Supported 00:15:22.065 Feature Identifiers & Effects Log Page:May Support 00:15:22.065 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.066 Data Area 4 for Telemetry Log: Not Supported 00:15:22.066 Error Log Page Entries Supported: 128 00:15:22.066 Keep Alive: Supported 00:15:22.066 Keep Alive Granularity: 10000 ms 00:15:22.066 00:15:22.066 NVM Command Set Attributes 00:15:22.066 ========================== 00:15:22.066 Submission Queue Entry Size 00:15:22.066 Max: 64 00:15:22.066 Min: 64 00:15:22.066 Completion Queue Entry Size 00:15:22.066 Max: 16 00:15:22.066 Min: 16 00:15:22.066 Number of Namespaces: 32 00:15:22.066 Compare Command: Supported 00:15:22.066 Write Uncorrectable Command: Not Supported 00:15:22.066 Dataset Management Command: Supported 00:15:22.066 Write Zeroes Command: Supported 00:15:22.066 Set Features Save Field: Not Supported 00:15:22.066 Reservations: Not Supported 00:15:22.066 Timestamp: Not Supported 00:15:22.066 Copy: Supported 00:15:22.066 Volatile Write Cache: Present 00:15:22.066 Atomic Write Unit (Normal): 1 00:15:22.066 Atomic Write Unit (PFail): 1 00:15:22.066 Atomic Compare & Write Unit: 1 00:15:22.066 Fused Compare & Write: Supported 00:15:22.066 Scatter-Gather List 00:15:22.066 SGL Command Set: Supported (Dword aligned) 00:15:22.066 SGL Keyed: Not Supported 00:15:22.066 SGL Bit Bucket Descriptor: Not Supported 00:15:22.066 SGL Metadata Pointer: Not Supported 00:15:22.066 Oversized SGL: Not Supported 00:15:22.066 SGL Metadata Address: Not Supported 00:15:22.066 SGL Offset: Not Supported 00:15:22.066 Transport SGL Data Block: Not Supported 00:15:22.066 Replay Protected Memory Block: Not Supported 00:15:22.066 00:15:22.066 Firmware Slot Information 00:15:22.066 ========================= 00:15:22.066 Active slot: 1 00:15:22.066 Slot 1 Firmware Revision: 25.01 00:15:22.066 00:15:22.066 00:15:22.066 Commands Supported and Effects 00:15:22.066 ============================== 00:15:22.066 Admin Commands 00:15:22.066 -------------- 00:15:22.066 Get Log Page (02h): Supported 00:15:22.066 Identify (06h): Supported 00:15:22.066 Abort (08h): Supported 00:15:22.066 Set Features (09h): Supported 
00:15:22.066 Get Features (0Ah): Supported 00:15:22.066 Asynchronous Event Request (0Ch): Supported 00:15:22.066 Keep Alive (18h): Supported 00:15:22.066 I/O Commands 00:15:22.066 ------------ 00:15:22.066 Flush (00h): Supported LBA-Change 00:15:22.066 Write (01h): Supported LBA-Change 00:15:22.066 Read (02h): Supported 00:15:22.066 Compare (05h): Supported 00:15:22.066 Write Zeroes (08h): Supported LBA-Change 00:15:22.066 Dataset Management (09h): Supported LBA-Change 00:15:22.066 Copy (19h): Supported LBA-Change 00:15:22.066 00:15:22.066 Error Log 00:15:22.066 ========= 00:15:22.066 00:15:22.066 Arbitration 00:15:22.066 =========== 00:15:22.066 Arbitration Burst: 1 00:15:22.066 00:15:22.066 Power Management 00:15:22.066 ================ 00:15:22.066 Number of Power States: 1 00:15:22.066 Current Power State: Power State #0 00:15:22.066 Power State #0: 00:15:22.066 Max Power: 0.00 W 00:15:22.066 Non-Operational State: Operational 00:15:22.066 Entry Latency: Not Reported 00:15:22.066 Exit Latency: Not Reported 00:15:22.066 Relative Read Throughput: 0 00:15:22.066 Relative Read Latency: 0 00:15:22.066 Relative Write Throughput: 0 00:15:22.066 Relative Write Latency: 0 00:15:22.066 Idle Power: Not Reported 00:15:22.066 Active Power: Not Reported 00:15:22.066 Non-Operational Permissive Mode: Not Supported 00:15:22.066 00:15:22.066 Health Information 00:15:22.066 ================== 00:15:22.066 Critical Warnings: 00:15:22.066 Available Spare Space: OK 00:15:22.066 Temperature: OK 00:15:22.066 Device Reliability: OK 00:15:22.066 Read Only: No 00:15:22.066 Volatile Memory Backup: OK 00:15:22.066 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:22.066 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.066 Available Spare: 0% 00:15:22.066 Available Sp[2024-11-15 10:55:41.539668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.066 [2024-11-15 10:55:41.547568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.066 [2024-11-15 10:55:41.547592] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:22.066 [2024-11-15 10:55:41.547598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.066 [2024-11-15 10:55:41.547603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.066 [2024-11-15 10:55:41.547608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.066 [2024-11-15 10:55:41.547612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.066 [2024-11-15 10:55:41.547652] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.066 [2024-11-15 10:55:41.547660] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:22.066 [2024-11-15 10:55:41.548654] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.066 [2024-11-15 10:55:41.548691] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:22.066 [2024-11-15 10:55:41.548696] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:22.066 [2024-11-15 10:55:41.549665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:22.066 [2024-11-15 10:55:41.549674] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:22.066 [2024-11-15 10:55:41.549717] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:22.066 [2024-11-15 10:55:41.550680] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.066 are Threshold: 0% 00:15:22.066 Life Percentage Used: 0% 00:15:22.066 Data Units Read: 0 00:15:22.066 Data Units Written: 0 00:15:22.066 Host Read Commands: 0 00:15:22.066 Host Write Commands: 0 00:15:22.066 Controller Busy Time: 0 minutes 00:15:22.066 Power Cycles: 0 00:15:22.066 Power On Hours: 0 hours 00:15:22.066 Unsafe Shutdowns: 0 00:15:22.066 Unrecoverable Media Errors: 0 00:15:22.066 Lifetime Error Log Entries: 0 00:15:22.066 Warning Temperature Time: 0 minutes 00:15:22.066 Critical Temperature Time: 0 minutes 00:15:22.066 00:15:22.066 Number of Queues 00:15:22.066 ================ 00:15:22.066 Number of I/O Submission Queues: 127 00:15:22.066 Number of I/O Completion Queues: 127 00:15:22.066 00:15:22.066 Active Namespaces 00:15:22.066 ================= 00:15:22.066 Namespace ID:1 00:15:22.066 Error Recovery Timeout: Unlimited 00:15:22.066 Command Set Identifier: NVM (00h) 00:15:22.066 Deallocate: Supported 00:15:22.066 Deallocated/Unwritten Error: Not Supported 00:15:22.066 Deallocated Read Value: Unknown 00:15:22.066 Deallocate in Write Zeroes: Not Supported 00:15:22.066 Deallocated Guard Field: 0xFFFF 00:15:22.066 Flush: Supported 00:15:22.066 Reservation: Supported 00:15:22.066 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.066 Size (in LBAs): 131072 (0GiB) 00:15:22.066 Capacity (in LBAs): 131072 (0GiB) 00:15:22.066 Utilization (in LBAs): 131072 (0GiB) 00:15:22.066 NGUID: 30BA6270C13C4EDA998095EE2CCDE0F4 00:15:22.066 UUID: 30ba6270-c13c-4eda-9980-95ee2ccde0f4 00:15:22.066 Thin Provisioning: Not Supported 00:15:22.066 Per-NS Atomic Units: Yes 00:15:22.066 Atomic Boundary Size (Normal): 0 00:15:22.066 Atomic Boundary Size (PFail): 0 00:15:22.066 Atomic Boundary Offset: 0 00:15:22.066 Maximum Single Source Range Length: 65535 00:15:22.066 Maximum Copy Length: 65535 00:15:22.066 Maximum Source Range Count: 1 00:15:22.066 NGUID/EUI64 Never Reused: No 00:15:22.066 Namespace Write Protected: No 00:15:22.066 Number of LBA Formats: 1 00:15:22.066 Current LBA Format: LBA Format #00 00:15:22.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.066 00:15:22.066 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.327 [2024-11-15 10:55:41.738625] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.613 Initializing NVMe Controllers 00:15:27.613 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.613 Initialization complete. Launching workers. 00:15:27.613 ======================================================== 00:15:27.613 Latency(us) 00:15:27.613 Device Information : IOPS MiB/s Average min max 00:15:27.613 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40003.80 156.26 3199.54 841.67 9285.01 00:15:27.613 ======================================================== 00:15:27.613 Total : 40003.80 156.26 3199.54 841.67 9285.01 00:15:27.613 00:15:27.613 [2024-11-15 10:55:46.842758] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.613 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.613 [2024-11-15 10:55:47.040365] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.899 Initializing NVMe Controllers 00:15:32.899 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.899 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.899 Initialization complete. Launching workers. 00:15:32.899 ======================================================== 00:15:32.899 Latency(us) 00:15:32.899 Device Information : IOPS MiB/s Average min max 00:15:32.899 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39993.38 156.22 3200.79 853.03 7347.50 00:15:32.899 ======================================================== 00:15:32.899 Total : 39993.38 156.22 3200.79 853.03 7347.50 00:15:32.899 00:15:32.899 [2024-11-15 10:55:52.057983] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.899 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.899 [2024-11-15 10:55:52.263178] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.180 [2024-11-15 10:55:57.407653] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.180 Initializing NVMe Controllers 00:15:38.180 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.180 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:38.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:38.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:38.180 Initialization complete. Launching workers. 
00:15:38.180 Starting thread on core 2 00:15:38.180 Starting thread on core 3 00:15:38.180 Starting thread on core 1 00:15:38.180 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:38.180 [2024-11-15 10:55:57.660936] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.480 [2024-11-15 10:56:00.744819] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.480 Initializing NVMe Controllers 00:15:41.480 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.480 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.480 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.480 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.480 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.480 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.480 Initialization complete. Launching workers. 00:15:41.480 Starting thread on core 1 with urgent priority queue 00:15:41.480 Starting thread on core 2 with urgent priority queue 00:15:41.480 Starting thread on core 3 with urgent priority queue 00:15:41.480 Starting thread on core 0 with urgent priority queue 00:15:41.480 SPDK bdev Controller (SPDK2 ) core 0: 12218.67 IO/s 8.18 secs/100000 ios 00:15:41.480 SPDK bdev Controller (SPDK2 ) core 1: 8042.67 IO/s 12.43 secs/100000 ios 00:15:41.480 SPDK bdev Controller (SPDK2 ) core 2: 8061.33 IO/s 12.40 secs/100000 ios 00:15:41.480 SPDK bdev Controller (SPDK2 ) core 3: 8088.33 IO/s 12.36 secs/100000 ios 00:15:41.480 ======================================================== 00:15:41.480 00:15:41.480 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.480 [2024-11-15 10:56:00.986923] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.480 Initializing NVMe Controllers 00:15:41.480 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.480 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.480 Namespace ID: 1 size: 0GB 00:15:41.480 Initialization complete. 00:15:41.480 INFO: using host memory buffer for IO 00:15:41.480 Hello world! 
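The throughput and arbitration figures above come straight from the bundled SPDK example binaries pointed at the second vfio-user endpoint. A minimal re-run sketch, assuming the same workspace layout as this job (between the read and write passes only the -w value changes):

```bash
# Replays the read-bandwidth pass from the log above; swap -w read for
# -w write to reproduce the second latency table. Paths are the ones
# used by this job's workspace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_nvme_perf" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
```

The reconnect, arbitration, and hello_world examples in the trace address the controller with the same -r connection string; that string, rather than a PCIe BDF, is how these tools attach to a vfio-user controller.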
00:15:41.480 [2024-11-15 10:56:00.996983] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.741 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.741 [2024-11-15 10:56:01.235813] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.126 Initializing NVMe Controllers 00:15:43.126 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.126 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.126 Initialization complete. Launching workers. 00:15:43.126 submit (in ns) avg, min, max = 5720.9, 2824.2, 3998542.5 00:15:43.126 complete (in ns) avg, min, max = 15416.8, 1624.2, 4003187.5 00:15:43.126 00:15:43.126 Submit histogram 00:15:43.126 ================ 00:15:43.126 Range in us Cumulative Count 00:15:43.126 2.813 - 2.827: 0.0245% ( 5) 00:15:43.126 2.827 - 2.840: 0.9967% ( 198) 00:15:43.126 2.840 - 2.853: 2.8083% ( 369) 00:15:43.126 2.853 - 2.867: 5.8867% ( 627) 00:15:43.126 2.867 - 2.880: 10.4870% ( 937) 00:15:43.126 2.880 - 2.893: 16.3148% ( 1187) 00:15:43.126 2.893 - 2.907: 21.3767% ( 1031) 00:15:43.126 2.907 - 2.920: 27.1308% ( 1172) 00:15:43.126 2.920 - 2.933: 32.5363% ( 1101) 00:15:43.126 2.933 - 2.947: 37.8191% ( 1076) 00:15:43.126 2.947 - 2.960: 42.8024% ( 1015) 00:15:43.126 2.960 - 2.973: 47.8152% ( 1021) 00:15:43.126 2.973 - 2.987: 53.6675% ( 1192) 00:15:43.126 2.987 - 3.000: 62.1514% ( 1728) 00:15:43.126 3.000 - 3.013: 71.4700% ( 1898) 00:15:43.126 3.013 - 3.027: 80.3957% ( 1818) 00:15:43.126 3.027 - 3.040: 87.2889% ( 1404) 00:15:43.126 3.040 - 3.053: 92.4244% ( 1046) 00:15:43.126 3.053 - 3.067: 96.0772% ( 744) 00:15:43.126 3.067 - 3.080: 97.8594% ( 363) 00:15:43.126 3.080 - 3.093: 98.9051% ( 213) 00:15:43.126 3.093 - 3.107: 99.2832% ( 77) 00:15:43.126 3.107 - 3.120: 99.4648% ( 37) 00:15:43.126 3.120 - 3.133: 99.5238% ( 12) 00:15:43.126 3.133 - 3.147: 99.5729% ( 10) 00:15:43.126 3.147 - 3.160: 99.5876% ( 3) 00:15:43.126 3.160 - 3.173: 99.6023% ( 3) 00:15:43.126 3.173 - 3.187: 99.6072% ( 1) 00:15:43.126 3.267 - 3.280: 99.6121% ( 1) 00:15:43.126 3.293 - 3.307: 99.6170% ( 1) 00:15:43.126 3.333 - 3.347: 99.6220% ( 1) 00:15:43.126 3.400 - 3.413: 99.6269% ( 1) 00:15:43.126 3.440 - 3.467: 99.6318% ( 1) 00:15:43.126 3.467 - 3.493: 99.6367% ( 1) 00:15:43.126 3.493 - 3.520: 99.6416% ( 1) 00:15:43.126 3.520 - 3.547: 99.6465% ( 1) 00:15:43.126 3.573 - 3.600: 99.6514% ( 1) 00:15:43.126 3.600 - 3.627: 99.6563% ( 1) 00:15:43.126 3.920 - 3.947: 99.6612% ( 1) 00:15:43.126 4.187 - 4.213: 99.6661% ( 1) 00:15:43.126 4.293 - 4.320: 99.6711% ( 1) 00:15:43.126 4.533 - 4.560: 99.6760% ( 1) 00:15:43.126 4.613 - 4.640: 99.6809% ( 1) 00:15:43.126 4.640 - 4.667: 99.6858% ( 1) 00:15:43.126 4.693 - 4.720: 99.6907% ( 1) 00:15:43.126 4.720 - 4.747: 99.6956% ( 1) 00:15:43.126 4.747 - 4.773: 99.7005% ( 1) 00:15:43.126 4.880 - 4.907: 99.7054% ( 1) 00:15:43.126 4.907 - 4.933: 99.7152% ( 2) 00:15:43.126 4.960 - 4.987: 99.7251% ( 2) 00:15:43.126 4.987 - 5.013: 99.7300% ( 1) 00:15:43.126 5.013 - 5.040: 99.7349% ( 1) 00:15:43.126 5.147 - 5.173: 99.7447% ( 2) 00:15:43.126 5.173 - 5.200: 99.7496% ( 1) 00:15:43.126 5.200 - 5.227: 99.7545% ( 1) 00:15:43.126 5.227 - 5.253: 99.7692% ( 3) 00:15:43.126 5.280 - 5.307: 99.7742% ( 1) 00:15:43.126 5.333 - 5.360: 
99.7791% ( 1) 00:15:43.126 5.387 - 5.413: 99.7889% ( 2) 00:15:43.126 5.467 - 5.493: 99.7938% ( 1) 00:15:43.126 5.600 - 5.627: 99.8036% ( 2) 00:15:43.126 5.787 - 5.813: 99.8085% ( 1) 00:15:43.126 5.813 - 5.840: 99.8183% ( 2) 00:15:43.126 5.867 - 5.893: 99.8233% ( 1) 00:15:43.126 5.947 - 5.973: 99.8282% ( 1) 00:15:43.126 6.027 - 6.053: 99.8380% ( 2) 00:15:43.126 6.080 - 6.107: 99.8429% ( 1) 00:15:43.126 6.187 - 6.213: 99.8478% ( 1) 00:15:43.126 6.267 - 6.293: 99.8527% ( 1) 00:15:43.126 6.373 - 6.400: 99.8674% ( 3) 00:15:43.126 6.427 - 6.453: 99.8723% ( 1) 00:15:43.126 6.453 - 6.480: 99.8773% ( 1) 00:15:43.126 6.480 - 6.507: 99.8822% ( 1) 00:15:43.126 6.533 - 6.560: 99.8920% ( 2) 00:15:43.126 6.773 - 6.800: 99.9067% ( 3) 00:15:43.126 6.880 - 6.933: 99.9116% ( 1) 00:15:43.126 6.933 - 6.987: 99.9165% ( 1) 00:15:43.126 7.253 - 7.307: 99.9214% ( 1) 00:15:43.126 7.360 - 7.413: 99.9264% ( 1) 00:15:43.126 [2024-11-15 10:56:02.330118] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.126 8.533 - 8.587: 99.9313% ( 1) 00:15:43.126 3986.773 - 4014.080: 100.0000% ( 14) 00:15:43.126 00:15:43.126 Complete histogram 00:15:43.126 ================== 00:15:43.126 Range in us Cumulative Count 00:15:43.126 1.620 - 1.627: 0.0049% ( 1) 00:15:43.126 1.633 - 1.640: 0.2258% ( 45) 00:15:43.126 1.640 - 1.647: 0.7905% ( 115) 00:15:43.126 1.647 - 1.653: 0.8690% ( 16) 00:15:43.126 1.653 - 1.660: 0.9819% ( 23) 00:15:43.126 1.660 - 1.667: 1.0703% ( 18) 00:15:43.126 1.667 - 1.673: 1.1096% ( 8) 00:15:43.126 1.673 - 1.680: 4.8802% ( 768) 00:15:43.126 1.680 - 1.687: 46.0183% ( 8379) 00:15:43.126 1.687 - 1.693: 53.0587% ( 1434) 00:15:43.126 1.693 - 1.700: 62.1072% ( 1843) 00:15:43.126 1.700 - 1.707: 72.3684% ( 2090) 00:15:43.126 1.707 - 1.720: 81.4169% ( 1843) 00:15:43.126 1.720 - 1.733: 83.6115% ( 447) 00:15:43.126 1.733 - 1.747: 86.7636% ( 642) 00:15:43.126 1.747 - 1.760: 91.9285% ( 1052) 00:15:43.126 1.760 - 1.773: 96.3374% ( 898) 00:15:43.126 1.773 - 1.787: 98.4633% ( 433) 00:15:43.126 1.787 - 1.800: 99.3126% ( 173) 00:15:43.126 1.800 - 1.813: 99.5139% ( 41) 00:15:43.126 1.813 - 1.827: 99.5434% ( 6) 00:15:43.126 1.853 - 1.867: 99.5483% ( 1) 00:15:43.126 1.947 - 1.960: 99.5532% ( 1) 00:15:43.126 1.987 - 2.000: 99.5630% ( 2) 00:15:43.126 2.093 - 2.107: 99.5679% ( 1) 00:15:43.126 2.160 - 2.173: 99.5729% ( 1) 00:15:43.126 3.360 - 3.373: 99.5778% ( 1) 00:15:43.126 3.920 - 3.947: 99.5827% ( 1) 00:15:43.126 3.973 - 4.000: 99.5876% ( 1) 00:15:43.126 4.107 - 4.133: 99.5925% ( 1) 00:15:43.126 4.160 - 4.187: 99.5974% ( 1) 00:15:43.126 4.293 - 4.320: 99.6023% ( 1) 00:15:43.126 4.320 - 4.347: 99.6170% ( 3) 00:15:43.127 4.373 - 4.400: 99.6220% ( 1) 00:15:43.127 4.533 - 4.560: 99.6269% ( 1) 00:15:43.127 4.613 - 4.640: 99.6367% ( 2) 00:15:43.127 4.667 - 4.693: 99.6416% ( 1) 00:15:43.127 4.827 - 4.853: 99.6465% ( 1) 00:15:43.127 6.000 - 6.027: 99.6514% ( 1) 00:15:43.127 27.200 - 27.307: 99.6563% ( 1) 00:15:43.127 3604.480 - 3631.787: 99.6612% ( 1) 00:15:43.127 3986.773 - 4014.080: 100.0000% ( 69) 00:15:43.127 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 
00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.127 [ 00:15:43.127 { 00:15:43.127 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.127 "subtype": "Discovery", 00:15:43.127 "listen_addresses": [], 00:15:43.127 "allow_any_host": true, 00:15:43.127 "hosts": [] 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.127 "subtype": "NVMe", 00:15:43.127 "listen_addresses": [ 00:15:43.127 { 00:15:43.127 "trtype": "VFIOUSER", 00:15:43.127 "adrfam": "IPv4", 00:15:43.127 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.127 "trsvcid": "0" 00:15:43.127 } 00:15:43.127 ], 00:15:43.127 "allow_any_host": true, 00:15:43.127 "hosts": [], 00:15:43.127 "serial_number": "SPDK1", 00:15:43.127 "model_number": "SPDK bdev Controller", 00:15:43.127 "max_namespaces": 32, 00:15:43.127 "min_cntlid": 1, 00:15:43.127 "max_cntlid": 65519, 00:15:43.127 "namespaces": [ 00:15:43.127 { 00:15:43.127 "nsid": 1, 00:15:43.127 "bdev_name": "Malloc1", 00:15:43.127 "name": "Malloc1", 00:15:43.127 "nguid": "4E0FA8D2906E49B384DE9D4BF44937C8", 00:15:43.127 "uuid": "4e0fa8d2-906e-49b3-84de-9d4bf44937c8" 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "nsid": 2, 00:15:43.127 "bdev_name": "Malloc3", 00:15:43.127 "name": "Malloc3", 00:15:43.127 "nguid": "9A7959213E164A27BCC0E4D627AAA5F0", 00:15:43.127 "uuid": "9a795921-3e16-4a27-bcc0-e4d627aaa5f0" 00:15:43.127 } 00:15:43.127 ] 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.127 "subtype": "NVMe", 00:15:43.127 "listen_addresses": [ 00:15:43.127 { 00:15:43.127 "trtype": "VFIOUSER", 00:15:43.127 "adrfam": "IPv4", 00:15:43.127 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.127 "trsvcid": "0" 00:15:43.127 } 00:15:43.127 ], 00:15:43.127 "allow_any_host": true, 00:15:43.127 "hosts": [], 00:15:43.127 "serial_number": "SPDK2", 00:15:43.127 "model_number": "SPDK bdev Controller", 00:15:43.127 "max_namespaces": 32, 00:15:43.127 "min_cntlid": 1, 00:15:43.127 "max_cntlid": 65519, 00:15:43.127 "namespaces": [ 00:15:43.127 { 00:15:43.127 "nsid": 1, 00:15:43.127 "bdev_name": "Malloc2", 00:15:43.127 "name": "Malloc2", 00:15:43.127 "nguid": "30BA6270C13C4EDA998095EE2CCDE0F4", 00:15:43.127 "uuid": "30ba6270-c13c-4eda-9980-95ee2ccde0f4" 00:15:43.127 } 00:15:43.127 ] 00:15:43.127 } 00:15:43.127 ] 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=350088 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.127 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.388 [2024-11-15 10:56:02.712942] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.388 Malloc4 00:15:43.388 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.388 [2024-11-15 10:56:02.898161] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.648 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.648 Asynchronous Event Request test 00:15:43.648 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.648 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.648 Registering asynchronous event callbacks... 00:15:43.648 Starting namespace attribute notice tests for all controllers... 00:15:43.648 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.648 aer_cb - Changed Namespace 00:15:43.648 Cleaning up... 
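The aer_cb notice above is the expected result: while the aer example is attached to cnode2, the test hot-adds a second namespace, which raises a namespace-attribute-changed AEN. Condensed from the trace (all three RPCs appear verbatim above; rpc.py talks to the default /var/tmp/spdk.sock socket):

```bash
# Hot-add a second namespace to cnode2 while the aer tool is attached;
# the resulting AEN is what aer_cb reports as "Changed Namespace".
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc4
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
"$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems   # listing below now shows Malloc4 as nsid 2
```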
00:15:43.648 [ 00:15:43.648 { 00:15:43.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.648 "subtype": "Discovery", 00:15:43.648 "listen_addresses": [], 00:15:43.648 "allow_any_host": true, 00:15:43.648 "hosts": [] 00:15:43.648 }, 00:15:43.648 { 00:15:43.648 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.648 "subtype": "NVMe", 00:15:43.648 "listen_addresses": [ 00:15:43.648 { 00:15:43.648 "trtype": "VFIOUSER", 00:15:43.648 "adrfam": "IPv4", 00:15:43.648 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.648 "trsvcid": "0" 00:15:43.648 } 00:15:43.648 ], 00:15:43.648 "allow_any_host": true, 00:15:43.648 "hosts": [], 00:15:43.648 "serial_number": "SPDK1", 00:15:43.648 "model_number": "SPDK bdev Controller", 00:15:43.648 "max_namespaces": 32, 00:15:43.648 "min_cntlid": 1, 00:15:43.648 "max_cntlid": 65519, 00:15:43.648 "namespaces": [ 00:15:43.648 { 00:15:43.648 "nsid": 1, 00:15:43.648 "bdev_name": "Malloc1", 00:15:43.648 "name": "Malloc1", 00:15:43.648 "nguid": "4E0FA8D2906E49B384DE9D4BF44937C8", 00:15:43.648 "uuid": "4e0fa8d2-906e-49b3-84de-9d4bf44937c8" 00:15:43.648 }, 00:15:43.648 { 00:15:43.648 "nsid": 2, 00:15:43.648 "bdev_name": "Malloc3", 00:15:43.648 "name": "Malloc3", 00:15:43.648 "nguid": "9A7959213E164A27BCC0E4D627AAA5F0", 00:15:43.648 "uuid": "9a795921-3e16-4a27-bcc0-e4d627aaa5f0" 00:15:43.648 } 00:15:43.648 ] 00:15:43.648 }, 00:15:43.648 { 00:15:43.649 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.649 "subtype": "NVMe", 00:15:43.649 "listen_addresses": [ 00:15:43.649 { 00:15:43.649 "trtype": "VFIOUSER", 00:15:43.649 "adrfam": "IPv4", 00:15:43.649 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.649 "trsvcid": "0" 00:15:43.649 } 00:15:43.649 ], 00:15:43.649 "allow_any_host": true, 00:15:43.649 "hosts": [], 00:15:43.649 "serial_number": "SPDK2", 00:15:43.649 "model_number": "SPDK bdev Controller", 00:15:43.649 "max_namespaces": 32, 00:15:43.649 "min_cntlid": 1, 00:15:43.649 "max_cntlid": 65519, 00:15:43.649 "namespaces": [ 00:15:43.649 { 00:15:43.649 "nsid": 1, 00:15:43.649 "bdev_name": "Malloc2", 00:15:43.649 "name": "Malloc2", 00:15:43.649 "nguid": "30BA6270C13C4EDA998095EE2CCDE0F4", 00:15:43.649 "uuid": "30ba6270-c13c-4eda-9980-95ee2ccde0f4" 00:15:43.649 }, 00:15:43.649 { 00:15:43.649 "nsid": 2, 00:15:43.649 "bdev_name": "Malloc4", 00:15:43.649 "name": "Malloc4", 00:15:43.649 "nguid": "A0E8B7D4C8B84E438D5D112690021E5A", 00:15:43.649 "uuid": "a0e8b7d4-c8b8-4e43-8d5d-112690021e5a" 00:15:43.649 } 00:15:43.649 ] 00:15:43.649 } 00:15:43.649 ] 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 350088 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 340994 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 340994 ']' 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 340994 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 340994 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 340994' 00:15:43.649 killing process with pid 340994 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 340994 00:15:43.649 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 340994 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=350110 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 350110' 00:15:43.909 Process pid: 350110 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 350110 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 350110 ']' 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.909 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:43.909 [2024-11-15 10:56:03.346254] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:43.909 [2024-11-15 10:56:03.346957] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:15:43.909 [2024-11-15 10:56:03.346991] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.909 [2024-11-15 10:56:03.398639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.909 [2024-11-15 10:56:03.427730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.909 [2024-11-15 10:56:03.427756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.909 [2024-11-15 10:56:03.427762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.909 [2024-11-15 10:56:03.427767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.909 [2024-11-15 10:56:03.427771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.909 [2024-11-15 10:56:03.429174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.909 [2024-11-15 10:56:03.429329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.909 [2024-11-15 10:56:03.429477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.909 [2024-11-15 10:56:03.429479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.169 [2024-11-15 10:56:03.481269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.169 [2024-11-15 10:56:03.482197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:44.169 [2024-11-15 10:56:03.482622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:44.169 [2024-11-15 10:56:03.483898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:44.169 [2024-11-15 10:56:03.483916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
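With the reactors now running in interrupt mode, the second half of the test (setup_nvmf_vfio_user --interrupt-mode '-M -I') rebuilds both vfio-user devices over RPC; the full trace for both devices follows below. A condensed sketch for device 1 only, using the exact flags from the trace:

```bash
# Per-device bring-up, condensed from the traced setup_nvmf_vfio_user run
# (device 2 repeats the same steps with Malloc2/SPDK2/vfio-user2/cnode2).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
```

The -M -I flags on nvmf_create_transport are what exercise the interrupt-mode transport path this half of the test is checking.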
00:15:44.169 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.169 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:44.169 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:45.110 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:45.370 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:45.370 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:45.370 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.370 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:45.370 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.629 Malloc1 00:15:45.629 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:45.629 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:45.890 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.150 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.150 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.150 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:46.150 Malloc2 00:15:46.410 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:46.410 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:46.672 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 350110 ']' 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 350110' 00:15:46.932 killing process with pid 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 350110 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.932 00:15:46.932 real 0m50.381s 00:15:46.932 user 3m15.527s 00:15:46.932 sys 0m2.614s 00:15:46.932 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:46.933 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.933 ************************************ 00:15:46.933 END TEST nvmf_vfio_user 00:15:46.933 ************************************ 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.194 ************************************ 00:15:47.194 START TEST nvmf_vfio_user_nvme_compliance 00:15:47.194 ************************************ 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:47.194 * Looking for test storage... 
00:15:47.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.194 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:47.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.195 --rc genhtml_branch_coverage=1 00:15:47.195 --rc genhtml_function_coverage=1 00:15:47.195 --rc genhtml_legend=1 00:15:47.195 --rc geninfo_all_blocks=1 00:15:47.195 --rc geninfo_unexecuted_blocks=1 00:15:47.195 00:15:47.195 ' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:47.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.195 --rc genhtml_branch_coverage=1 00:15:47.195 --rc genhtml_function_coverage=1 00:15:47.195 --rc genhtml_legend=1 00:15:47.195 --rc geninfo_all_blocks=1 00:15:47.195 --rc geninfo_unexecuted_blocks=1 00:15:47.195 00:15:47.195 ' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:47.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.195 --rc genhtml_branch_coverage=1 00:15:47.195 --rc genhtml_function_coverage=1 00:15:47.195 --rc genhtml_legend=1 00:15:47.195 --rc geninfo_all_blocks=1 00:15:47.195 --rc geninfo_unexecuted_blocks=1 00:15:47.195 00:15:47.195 ' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:47.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.195 --rc genhtml_branch_coverage=1 00:15:47.195 --rc genhtml_function_coverage=1 00:15:47.195 --rc genhtml_legend=1 00:15:47.195 --rc geninfo_all_blocks=1 00:15:47.195 --rc 
geninfo_unexecuted_blocks=1 00:15:47.195 00:15:47.195 ' 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.195 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.456 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=350862 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 350862' 00:15:47.457 Process pid: 350862 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 350862 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 350862 ']' 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:47.457 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.457 [2024-11-15 10:56:06.807074] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
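The "[: : integer expression expected" message above is bash's test builtin complaining that -eq was handed an empty string where it needs an integer; the trace shows the offending check, '[' '' -eq 1 ']', at nvmf/common.sh line 33. A minimal reproduction and a defensive variant follow; this assumes nothing about the real script beyond what the trace shows, and the flag variable is purely illustrative:

    # Reproduces the error: an unset/empty variable leaves -eq with no integer.
    flag=""
    if [ "$flag" -eq 1 ]; then      # stderr: "[: : integer expression expected"
        echo "enabled"
    fi

    # Defensive form (hypothetical fix): default the empty value to 0 so the
    # numeric test always sees an integer; the branch then simply isn't taken.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "enabled"
    fi

Note that the failing test returns a non-zero status rather than aborting the script, which is why the run above continues past the message.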
00:15:47.457 [2024-11-15 10:56:06.807124] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.457 [2024-11-15 10:56:06.891527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.457 [2024-11-15 10:56:06.922575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.457 [2024-11-15 10:56:06.922606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.457 [2024-11-15 10:56:06.922613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.457 [2024-11-15 10:56:06.922623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.457 [2024-11-15 10:56:06.922628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.457 [2024-11-15 10:56:06.923721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.457 [2024-11-15 10:56:06.923845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.457 [2024-11-15 10:56:06.923847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.400 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:48.400 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:48.400 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.341 malloc0 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:49.341 10:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.341 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.342 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:49.342 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.342 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.342 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.342 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:49.342 00:15:49.342 00:15:49.342 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.342 http://cunit.sourceforge.net/ 00:15:49.342 00:15:49.342 00:15:49.342 Suite: nvme_compliance 00:15:49.342 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 10:56:08.851031] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.342 [2024-11-15 10:56:08.852331] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:49.342 [2024-11-15 10:56:08.852344] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:49.342 [2024-11-15 10:56:08.852349] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:49.342 [2024-11-15 10:56:08.854047] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.602 passed 00:15:49.602 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 10:56:08.934528] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.602 [2024-11-15 10:56:08.937547] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.602 passed 00:15:49.602 Test: admin_identify_ns ...[2024-11-15 10:56:09.018411] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.602 [2024-11-15 10:56:09.078570] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:49.602 [2024-11-15 10:56:09.086570] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:49.602 [2024-11-15 10:56:09.107656] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:49.863 passed 00:15:49.863 Test: admin_get_features_mandatory_features ...[2024-11-15 10:56:09.182915] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.863 [2024-11-15 10:56:09.185936] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.863 passed 00:15:49.863 Test: admin_get_features_optional_features ...[2024-11-15 10:56:09.266432] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.863 [2024-11-15 10:56:09.269459] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.863 passed 00:15:49.863 Test: admin_set_features_number_of_queues ...[2024-11-15 10:56:09.345182] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.123 [2024-11-15 10:56:09.450655] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.123 passed 00:15:50.123 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 10:56:09.529907] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.123 [2024-11-15 10:56:09.532935] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.123 passed 00:15:50.123 Test: admin_get_log_page_with_lpo ...[2024-11-15 10:56:09.609099] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.383 [2024-11-15 10:56:09.677577] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:50.383 [2024-11-15 10:56:09.690611] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.383 passed 00:15:50.383 Test: fabric_property_get ...[2024-11-15 10:56:09.766831] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.383 [2024-11-15 10:56:09.768020] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:50.383 [2024-11-15 10:56:09.769848] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.383 passed 00:15:50.383 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 10:56:09.850332] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.383 [2024-11-15 10:56:09.851535] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:50.383 [2024-11-15 10:56:09.853359] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.383 passed 00:15:50.644 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 10:56:09.927086] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.644 [2024-11-15 10:56:10.010576] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:50.644 [2024-11-15 10:56:10.026566] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:50.644 [2024-11-15 10:56:10.031641] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.644 passed 00:15:50.644 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 10:56:10.106870] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.644 [2024-11-15 10:56:10.108080] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:50.644 [2024-11-15 10:56:10.109895] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.644 passed 00:15:50.905 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 10:56:10.185669] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.905 [2024-11-15 10:56:10.264570] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:50.905 [2024-11-15 10:56:10.288569] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:50.905 [2024-11-15 10:56:10.293637] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.905 passed 00:15:50.905 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 10:56:10.368810] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.905 [2024-11-15 10:56:10.370012] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:50.905 [2024-11-15 10:56:10.370031] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:50.905 [2024-11-15 10:56:10.371832] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.905 passed 00:15:51.165 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 10:56:10.448574] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.165 [2024-11-15 10:56:10.542573] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:51.165 [2024-11-15 10:56:10.550572] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:51.165 [2024-11-15 10:56:10.558568] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:51.165 [2024-11-15 10:56:10.566569] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:51.165 [2024-11-15 10:56:10.595643] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.165 passed 00:15:51.165 Test: admin_create_io_sq_verify_pc ...[2024-11-15 10:56:10.668847] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.165 [2024-11-15 10:56:10.685578] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:51.425 [2024-11-15 10:56:10.702999] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.425 passed 00:15:51.425 Test: admin_create_io_qp_max_qps ...[2024-11-15 10:56:10.778439] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.366 [2024-11-15 10:56:11.880571] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:52.936 [2024-11-15 10:56:12.263612] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.937 passed 00:15:52.937 Test: admin_create_io_sq_shared_cq ...[2024-11-15 10:56:12.342856] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.198 [2024-11-15 10:56:12.475574] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.198 [2024-11-15 10:56:12.512613] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.198 passed 00:15:53.198 00:15:53.198 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.198 suites 1 1 n/a 0 0 00:15:53.198 tests 18 18 18 0 0 00:15:53.198 asserts 
360 360 360 0 n/a 00:15:53.198 00:15:53.198 Elapsed time = 1.506 seconds 00:15:53.198 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 350862 00:15:53.198 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 350862 ']' 00:15:53.198 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 350862 00:15:53.198 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:53.198 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 350862 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 350862' 00:15:53.199 killing process with pid 350862 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 350862 00:15:53.199 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 350862 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:53.460 00:15:53.460 real 0m6.219s 00:15:53.460 user 0m17.673s 00:15:53.460 sys 0m0.504s 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 ************************************ 00:15:53.460 END TEST nvmf_vfio_user_nvme_compliance 00:15:53.460 ************************************ 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 ************************************ 00:15:53.460 START TEST nvmf_vfio_user_fuzz 00:15:53.460 ************************************ 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.460 * Looking for test storage... 
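The lt/cmp_versions trace that each test repeats when it sources scripts/common.sh (it ran for the compliance test above and runs again just below) compares the installed lcov version against 2 field by field. A condensed sketch of that comparison, reusing the names the trace itself shows (ver1, ver2, ver1_l, ver2_l); the real helper also validates each field through a decimal check that is elided here:

    # Split both versions on . - : and compare numerically, left to right.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ver1[v] > ver2[v] )) && return 1   # first version is newer
            (( ver1[v] < ver2[v] )) && return 0   # first version is older
        done
        return 1                                  # equal: strict "<" fails
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message

Because lcov 1.15 sorts below 2, the script selects the pre-2.0 option style, which is where the "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" values exported into LCOV_OPTS above come from.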
00:15:53.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:53.460 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.722 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.722 --rc genhtml_branch_coverage=1 00:15:53.722 --rc genhtml_function_coverage=1 00:15:53.722 --rc genhtml_legend=1 00:15:53.722 --rc geninfo_all_blocks=1 00:15:53.722 --rc geninfo_unexecuted_blocks=1 00:15:53.722 00:15:53.722 ' 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.722 --rc genhtml_branch_coverage=1 00:15:53.722 --rc genhtml_function_coverage=1 00:15:53.722 --rc genhtml_legend=1 00:15:53.722 --rc geninfo_all_blocks=1 00:15:53.722 --rc geninfo_unexecuted_blocks=1 00:15:53.722 00:15:53.722 ' 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.722 --rc genhtml_branch_coverage=1 00:15:53.722 --rc genhtml_function_coverage=1 00:15:53.722 --rc genhtml_legend=1 00:15:53.722 --rc geninfo_all_blocks=1 00:15:53.722 --rc geninfo_unexecuted_blocks=1 00:15:53.722 00:15:53.722 ' 00:15:53.722 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.722 --rc genhtml_branch_coverage=1 00:15:53.722 --rc genhtml_function_coverage=1 00:15:53.722 --rc genhtml_legend=1 00:15:53.722 --rc geninfo_all_blocks=1 00:15:53.722 --rc geninfo_unexecuted_blocks=1 00:15:53.722 00:15:53.723 ' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:53.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=352262 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 352262' 00:15:53.723 Process pid: 352262 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 352262 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 352262 ']' 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
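The waitforlisten 352262 call above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock; the trace shows its locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A minimal sketch of that poll-until-ready pattern; the probe used here, rpc.py spdk_get_version, is an illustrative choice and not necessarily the call autotest_common.sh actually issues:

    # Poll the RPC socket until the target answers, instead of a fixed sleep.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                    # max_retries=100
            kill -0 "$pid" 2>/dev/null || return 1         # target died early
            ./scripts/rpc.py -s "$rpc_addr" spdk_get_version \
                >/dev/null 2>&1 && return 0                # target is up
            sleep 0.1
        done
        return 1                                           # never came up
    }

    waitforlisten "$nvmfpid" /var/tmp/spdk.sock || exit 1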
00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.723 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.667 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:54.667 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.606 malloc0 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
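The rpc_cmd sequence above assembles the fuzz target: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with its namespace, and a listener at /var/run/vfio-user. Condensed into a standalone script under the assumption that rpc_cmd wraps scripts/rpc.py against the default socket (the rpc variable below is illustrative):

    #!/usr/bin/env bash
    # Stand up the same vfio-user fuzz target the trace builds over RPC.
    rpc="./scripts/rpc.py"                       # assumed rpc_cmd equivalent
    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user

    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p "$traddr"
    $rpc bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem "$nqn" -a -s spdk
    $rpc nvmf_subsystem_add_ns "$nqn" malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

    # 30-second fuzz pass with the fixed seed from the invocation that follows:
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F "trtype:VFIOUSER subnqn:$nqn traddr:$traddr" -N -a

Every RPC name and flag here is taken directly from the trace; the fuzzer's summary further down reports the admin and I/O opcodes it completed against this target.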
00:15:55.606 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:27.715 Fuzzing completed. Shutting down the fuzz application 00:16:27.715 00:16:27.715 Dumping successful admin opcodes: 00:16:27.715 8, 9, 10, 24, 00:16:27.715 Dumping successful io opcodes: 00:16:27.715 0, 00:16:27.715 NS: 0x20000081ef00 I/O qp, Total commands completed: 1455078, total successful commands: 5690, random_seed: 2392948672 00:16:27.715 NS: 0x20000081ef00 admin qp, Total commands completed: 362177, total successful commands: 2925, random_seed: 156155648 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 352262 ']' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 352262' 00:16:27.715 killing process with pid 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 352262 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:27.715 00:16:27.715 real 0m32.820s 00:16:27.715 user 0m38.109s 00:16:27.715 sys 0m24.754s 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.715 ************************************ 
00:16:27.715 END TEST nvmf_vfio_user_fuzz 00:16:27.715 ************************************ 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.715 ************************************ 00:16:27.715 START TEST nvmf_auth_target 00:16:27.715 ************************************ 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:27.715 * Looking for test storage... 00:16:27.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:27.715 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:27.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.716 --rc genhtml_branch_coverage=1 00:16:27.716 --rc genhtml_function_coverage=1 00:16:27.716 --rc genhtml_legend=1 00:16:27.716 --rc geninfo_all_blocks=1 00:16:27.716 --rc geninfo_unexecuted_blocks=1 00:16:27.716 00:16:27.716 ' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:27.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.716 --rc genhtml_branch_coverage=1 00:16:27.716 --rc genhtml_function_coverage=1 00:16:27.716 --rc genhtml_legend=1 00:16:27.716 --rc geninfo_all_blocks=1 00:16:27.716 --rc geninfo_unexecuted_blocks=1 00:16:27.716 00:16:27.716 ' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:27.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.716 --rc genhtml_branch_coverage=1 00:16:27.716 --rc genhtml_function_coverage=1 00:16:27.716 --rc genhtml_legend=1 00:16:27.716 --rc geninfo_all_blocks=1 00:16:27.716 --rc geninfo_unexecuted_blocks=1 00:16:27.716 00:16:27.716 ' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:27.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.716 --rc genhtml_branch_coverage=1 00:16:27.716 --rc genhtml_function_coverage=1 00:16:27.716 --rc genhtml_legend=1 00:16:27.716 --rc geninfo_all_blocks=1 00:16:27.716 --rc geninfo_unexecuted_blocks=1 00:16:27.716 00:16:27.716 ' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.716 10:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.716 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:34.308 
10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:34.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.308 10:56:53 
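The device probe above is a straight PCI-ID classification: E810 ports are collected via the 0x8086:0x1592 and 0x8086:0x159b entries, x722 via 0x8086:0x37d2, and the Mellanox IDs land in mlx; this run then keeps only the E810 list (pci_devs is overwritten with it at @356). A hypothetical spot check with stock tools, outside the harness:

lspci -d 8086:159b   # E810 ports, the device ID matched at nvmf/common.sh@326
lspci -d 8086:1592   # the other E810 variant probed at @325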
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:34.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:34.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:34.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.308 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.309 10:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:16:34.309 00:16:34.309 --- 10.0.0.2 ping statistics --- 00:16:34.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.309 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:16:34.309 00:16:34.309 --- 10.0.0.1 ping statistics --- 00:16:34.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.309 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=362242 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 362242 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362242 ']' 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
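With connectivity verified in both directions, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the ip netns exec command at @293) and blocks until its RPC socket answers. Roughly, with the backgrounding implied by the nvmfpid capture at @509:

# nvmf_tgt serves the subsystem side; -L nvmf_auth enables the DH-HMAC-CHAP trace flag
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"    # polls the default /var/tmp/spdk.sock RPC socket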
00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.309 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=362494 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dee76a34e4a78ca24a1f5802a55dbcded1e90234eaf0ab60 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6Q9 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dee76a34e4a78ca24a1f5802a55dbcded1e90234eaf0ab60 0 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dee76a34e4a78ca24a1f5802a55dbcded1e90234eaf0ab60 0 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dee76a34e4a78ca24a1f5802a55dbcded1e90234eaf0ab60 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:34.881 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
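The python step just traced is where raw hex from /dev/urandom becomes an interchange-format secret. A minimal reconstruction of what format_dhchap_key appears to compute, assuming the same convention as nvme-cli's gen-dhchap-key (base64 over the ASCII key bytes plus their CRC32 in little-endian order, with the transform index taken from the digest map at @752):

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, the "null 48" case above
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the secret is kept as ASCII hex
crc = zlib.crc32(key).to_bytes(4, "little")     # 4-byte integrity trailer
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())   # 00 = null transform
PYEOF

The "DHHC-1:00:...:" strings handed to nvme connect later in this log are exactly these formatted secrets.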
00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6Q9 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6Q9 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.6Q9 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e2a320b57b493bbc130f3af84037ad8b873f0d0c97e6748c36444d539447226 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xD6 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2e2a320b57b493bbc130f3af84037ad8b873f0d0c97e6748c36444d539447226 3 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2e2a320b57b493bbc130f3af84037ad8b873f0d0c97e6748c36444d539447226 3 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.142 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e2a320b57b493bbc130f3af84037ad8b873f0d0c97e6748c36444d539447226 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xD6 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xD6 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xD6 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca6524b8e77178c304f546e42a8a69cb 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Yc3 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ca6524b8e77178c304f546e42a8a69cb 1 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca6524b8e77178c304f546e42a8a69cb 1 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca6524b8e77178c304f546e42a8a69cb 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Yc3 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Yc3 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Yc3 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=616746090155403e5f83aef81935813176993ee1cc9efe6e 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R1f 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 616746090155403e5f83aef81935813176993ee1cc9efe6e 2 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 616746090155403e5f83aef81935813176993ee1cc9efe6e 2 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.143 10:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=616746090155403e5f83aef81935813176993ee1cc9efe6e 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R1f 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R1f 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.R1f 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.143 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f671afe8b1e03da865080c95aa955a91f60085a90852270 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OPM 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f671afe8b1e03da865080c95aa955a91f60085a90852270 2 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4f671afe8b1e03da865080c95aa955a91f60085a90852270 2 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f671afe8b1e03da865080c95aa955a91f60085a90852270 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OPM 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OPM 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.OPM 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=251d2bbc89a7a30d36c656f5764f6708 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jGb 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 251d2bbc89a7a30d36c656f5764f6708 1 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 251d2bbc89a7a30d36c656f5764f6708 1 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=251d2bbc89a7a30d36c656f5764f6708 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jGb 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jGb 00:16:35.413 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.jGb 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8b5c8259546bf83217f5d5df01517c1cdbd1aea75af74785a1e9da966cf18f25 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bhb 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 8b5c8259546bf83217f5d5df01517c1cdbd1aea75af74785a1e9da966cf18f25 3 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8b5c8259546bf83217f5d5df01517c1cdbd1aea75af74785a1e9da966cf18f25 3 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8b5c8259546bf83217f5d5df01517c1cdbd1aea75af74785a1e9da966cf18f25 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bhb 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bhb 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bhb 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 362242 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362242 ']' 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.414 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 362494 /var/tmp/host.sock 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 362494 ']' 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:35.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
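Taking stock of @94-@97: each of slots 0-2 pairs a host key with a controller key of a different hash and length, so those connects exercise two transforms at once, while slot 3 has no controller key and therefore tests one-way authentication only.

# keys[i] (host)       ckeys[i] (controller)
# key0  null,   48     ckey0  sha512, 64
# key1  sha256, 32     ckey1  sha384, 48
# key2  sha384, 48     ckey2  sha256, 32
# key3  sha512, 64     (none: unidirectional)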
00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.675 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6Q9 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6Q9 00:16:35.936 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6Q9 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.xD6 ]] 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xD6 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xD6 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xD6 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Yc3 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.197 10:56:55 
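The registration loop above (continuing below for the remaining slots) adds every key file twice, once on the target's default RPC socket and once on the host daemon at /var/tmp/host.sock, so both sides can later reference the same secret by name. One slot, spelled out with the SPDK tree's rpc.py:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.6Q9                         # target side
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6Q9   # host side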
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Yc3 00:16:36.197 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Yc3 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.R1f ]] 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R1f 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R1f 00:16:36.458 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R1f 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OPM 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OPM 00:16:36.719 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OPM 00:16:36.979 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.jGb ]] 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jGb 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jGb 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jGb 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.980 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bhb 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bhb 00:16:36.980 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bhb 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.240 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.501 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.501 
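Here the first authenticated connect begins: connect_authenticate binds key slot 0 to the host entry on the subsystem, attaches a controller from the host daemon with the matching keys, and then asserts the negotiated session parameters. The skeleton of the check, as traced at @70-@77:

rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # must be "completed"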
10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.762 00:16:37.762 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.762 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.762 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.023 { 00:16:38.023 "cntlid": 1, 00:16:38.023 "qid": 0, 00:16:38.023 "state": "enabled", 00:16:38.023 "thread": "nvmf_tgt_poll_group_000", 00:16:38.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.023 "listen_address": { 00:16:38.023 "trtype": "TCP", 00:16:38.023 "adrfam": "IPv4", 00:16:38.023 "traddr": "10.0.0.2", 00:16:38.023 "trsvcid": "4420" 00:16:38.023 }, 00:16:38.023 "peer_address": { 00:16:38.023 "trtype": "TCP", 00:16:38.023 "adrfam": "IPv4", 00:16:38.023 "traddr": "10.0.0.1", 00:16:38.023 "trsvcid": "59132" 00:16:38.023 }, 00:16:38.023 "auth": { 00:16:38.023 "state": "completed", 00:16:38.023 "digest": "sha256", 00:16:38.023 "dhgroup": "null" 00:16:38.023 } 00:16:38.023 } 00:16:38.023 ]' 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.023 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.284 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:38.284 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.855 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.116 10:56:58 
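Each pass also validates the kernel initiator path: the raw DHHC-1 secrets go straight onto the nvme-cli command line, with the transform index matching the digest map (00 = null for the host key, 03 = sha512 for its controller key). The shape of the call, secrets elided:

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -l 0 \
     -q "$hostnqn" --hostid "$hostid" \
     --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
nvme disconnect -n "$subnqn"    # expect: disconnected 1 controller(s)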
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.116 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.376 00:16:39.376 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.376 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.376 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.638 { 00:16:39.638 "cntlid": 3, 00:16:39.638 "qid": 0, 00:16:39.638 "state": "enabled", 00:16:39.638 "thread": "nvmf_tgt_poll_group_000", 00:16:39.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.638 "listen_address": { 00:16:39.638 "trtype": "TCP", 00:16:39.638 "adrfam": "IPv4", 00:16:39.638 "traddr": "10.0.0.2", 00:16:39.638 "trsvcid": "4420" 00:16:39.638 }, 00:16:39.638 "peer_address": { 00:16:39.638 "trtype": "TCP", 00:16:39.638 "adrfam": "IPv4", 00:16:39.638 "traddr": "10.0.0.1", 00:16:39.638 "trsvcid": "59154" 00:16:39.638 }, 00:16:39.638 "auth": { 00:16:39.638 "state": "completed", 00:16:39.638 "digest": "sha256", 00:16:39.638 "dhgroup": "null" 00:16:39.638 } 00:16:39.638 } 00:16:39.638 ]' 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.638 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.638 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.638 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.638 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.638 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.638 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.898 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:39.898 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.470 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.732 10:57:00 
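
The --dhchap-secret and --dhchap-ctrl-secret strings above use the DH-HMAC-CHAP secret representation from the NVMe authentication spec: DHHC-1:<t>:<base64 payload>:, where <t> is 00 for an untransformed secret and 01/02/03 for a SHA-256/SHA-384/SHA-512 transformed one, and the base64 payload carries the key material plus a CRC-32 check. Such secrets can be generated with nvme-cli; a hedged example, since flags vary between nvme-cli versions:

    # Generate a SHA-256-transformed DH-HMAC-CHAP secret for this host NQN
    # (hypothetical invocation; check `nvme gen-dhchap-key --help` locally):
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
            --nqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # prints a secret of the form DHHC-1:01:<base64>:
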
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.732 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.993 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.993 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.254 { 00:16:41.254 "cntlid": 5, 00:16:41.254 "qid": 0, 00:16:41.254 "state": "enabled", 00:16:41.254 "thread": "nvmf_tgt_poll_group_000", 00:16:41.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.254 "listen_address": { 00:16:41.254 "trtype": "TCP", 00:16:41.254 "adrfam": "IPv4", 00:16:41.254 "traddr": "10.0.0.2", 00:16:41.254 "trsvcid": "4420" 00:16:41.254 }, 00:16:41.254 "peer_address": { 00:16:41.254 "trtype": "TCP", 00:16:41.254 "adrfam": "IPv4", 00:16:41.254 "traddr": "10.0.0.1", 00:16:41.254 "trsvcid": "59182" 00:16:41.254 }, 00:16:41.254 "auth": { 00:16:41.254 "state": "completed", 00:16:41.254 "digest": "sha256", 00:16:41.254 "dhgroup": "null" 00:16:41.254 } 00:16:41.254 } 00:16:41.254 ]' 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.254 10:57:00 
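
After each attach, the test asserts from the target side that the queue pair negotiated exactly what was configured. The jq checks in the trace reduce to this sketch (rpc_cmd is the target-side RPC wrapper used throughout the script):

    # Verify the negotiated auth parameters on the target (sketch)
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
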
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.254 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.514 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:41.514 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.084 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.344 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.604 00:16:42.604 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.604 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.604 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.604 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.604 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.604 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.604 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.864 { 00:16:42.864 "cntlid": 7, 00:16:42.864 "qid": 0, 00:16:42.864 "state": "enabled", 00:16:42.864 "thread": "nvmf_tgt_poll_group_000", 00:16:42.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.864 "listen_address": { 00:16:42.864 "trtype": "TCP", 00:16:42.864 "adrfam": "IPv4", 00:16:42.864 "traddr": "10.0.0.2", 00:16:42.864 "trsvcid": "4420" 00:16:42.864 }, 00:16:42.864 "peer_address": { 00:16:42.864 "trtype": "TCP", 00:16:42.864 "adrfam": "IPv4", 00:16:42.864 "traddr": "10.0.0.1", 00:16:42.864 "trsvcid": "59204" 00:16:42.864 }, 00:16:42.864 "auth": { 00:16:42.864 "state": "completed", 00:16:42.864 "digest": "sha256", 00:16:42.864 "dhgroup": "null" 00:16:42.864 } 00:16:42.864 } 00:16:42.864 ]' 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.864 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.124 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:43.124 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.700 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.701 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.701 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.996 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.270 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.270 { 00:16:44.270 "cntlid": 9, 00:16:44.270 "qid": 0, 00:16:44.270 "state": "enabled", 00:16:44.270 "thread": "nvmf_tgt_poll_group_000", 00:16:44.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.270 "listen_address": { 00:16:44.270 "trtype": "TCP", 00:16:44.270 "adrfam": "IPv4", 00:16:44.270 "traddr": "10.0.0.2", 00:16:44.270 "trsvcid": "4420" 00:16:44.270 }, 00:16:44.270 "peer_address": { 00:16:44.270 "trtype": "TCP", 00:16:44.270 "adrfam": "IPv4", 00:16:44.270 "traddr": "10.0.0.1", 00:16:44.270 "trsvcid": "60186" 00:16:44.270 }, 00:16:44.270 "auth": { 00:16:44.270 "state": "completed", 00:16:44.270 "digest": "sha256", 00:16:44.270 "dhgroup": "ffdhe2048" 00:16:44.270 } 00:16:44.270 } 00:16:44.270 ]' 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.270 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.568 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:44.568 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.568 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.568 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.569 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.569 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:44.569 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:45.202 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.467 10:57:04 
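
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace is why the key3 passes authenticate unidirectionally: when no controller secret exists for a key id, the array expands to nothing and both nvmf_subsystem_add_host and the attach call receive only --dhchap-key. A sketch of the idiom, with $id standing in for the function's third positional parameter:

    # :+ expansion emits the option pair only when ckeys[id] is non-empty
    ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$id" "${ckey[@]}"
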
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.467 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.727 00:16:45.727 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.727 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.727 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.987 { 00:16:45.987 "cntlid": 11, 00:16:45.987 "qid": 0, 00:16:45.987 "state": "enabled", 00:16:45.987 "thread": "nvmf_tgt_poll_group_000", 00:16:45.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.987 "listen_address": { 00:16:45.987 "trtype": "TCP", 00:16:45.987 "adrfam": "IPv4", 00:16:45.987 "traddr": "10.0.0.2", 00:16:45.987 "trsvcid": "4420" 00:16:45.987 }, 00:16:45.987 "peer_address": { 00:16:45.987 "trtype": "TCP", 00:16:45.987 "adrfam": "IPv4", 00:16:45.987 "traddr": "10.0.0.1", 00:16:45.987 "trsvcid": "60200" 00:16:45.987 }, 00:16:45.987 "auth": { 00:16:45.987 "state": "completed", 00:16:45.987 "digest": "sha256", 00:16:45.987 "dhgroup": "ffdhe2048" 00:16:45.987 } 00:16:45.987 } 00:16:45.987 ]' 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.987 10:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.987 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.247 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:46.247 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.816 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.076 10:57:06 
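
By this point the trace has moved from dhgroup null to ffdhe2048; the @119 and @120 markers correspond to the two loops that drive this whole section. Their reconstructed shape, a sketch assuming the dhgroups and keys arrays are populated earlier in auth.sh, outside this excerpt:

    for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do         # key ids 0 through 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                    --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
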
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.076 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.336 00:16:47.336 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.336 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.336 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.596 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.596 { 00:16:47.596 "cntlid": 13, 00:16:47.596 "qid": 0, 00:16:47.596 "state": "enabled", 00:16:47.596 "thread": "nvmf_tgt_poll_group_000", 00:16:47.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.596 "listen_address": { 00:16:47.596 "trtype": "TCP", 00:16:47.596 "adrfam": "IPv4", 00:16:47.596 "traddr": "10.0.0.2", 00:16:47.596 "trsvcid": "4420" 00:16:47.596 }, 00:16:47.596 "peer_address": { 00:16:47.596 "trtype": "TCP", 00:16:47.596 "adrfam": "IPv4", 00:16:47.596 "traddr": "10.0.0.1", 00:16:47.596 "trsvcid": "60210" 00:16:47.596 }, 00:16:47.596 "auth": { 00:16:47.597 "state": "completed", 00:16:47.597 "digest": 
"sha256", 00:16:47.597 "dhgroup": "ffdhe2048" 00:16:47.597 } 00:16:47.597 } 00:16:47.597 ]' 00:16:47.597 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.597 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.597 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.597 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.597 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.597 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.597 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.597 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.857 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:47.857 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:48.427 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.428 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.688 10:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.688 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.689 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.689 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.689 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.949 00:16:48.949 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.949 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.949 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.210 { 00:16:49.210 "cntlid": 15, 00:16:49.210 "qid": 0, 00:16:49.210 "state": "enabled", 00:16:49.210 "thread": "nvmf_tgt_poll_group_000", 00:16:49.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.210 "listen_address": { 00:16:49.210 "trtype": "TCP", 00:16:49.210 "adrfam": "IPv4", 00:16:49.210 "traddr": "10.0.0.2", 00:16:49.210 "trsvcid": "4420" 00:16:49.210 }, 00:16:49.210 "peer_address": { 00:16:49.210 "trtype": "TCP", 00:16:49.210 "adrfam": "IPv4", 00:16:49.210 "traddr": "10.0.0.1", 00:16:49.210 
"trsvcid": "60234" 00:16:49.210 }, 00:16:49.210 "auth": { 00:16:49.210 "state": "completed", 00:16:49.210 "digest": "sha256", 00:16:49.210 "dhgroup": "ffdhe2048" 00:16:49.210 } 00:16:49.210 } 00:16:49.210 ]' 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.210 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.470 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:49.470 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.039 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:50.299 10:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.299 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.559 00:16:50.559 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.559 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.559 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.820 { 00:16:50.820 "cntlid": 17, 00:16:50.820 "qid": 0, 00:16:50.820 "state": "enabled", 00:16:50.820 "thread": "nvmf_tgt_poll_group_000", 00:16:50.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.820 "listen_address": { 00:16:50.820 "trtype": "TCP", 00:16:50.820 "adrfam": "IPv4", 
00:16:50.820 "traddr": "10.0.0.2", 00:16:50.820 "trsvcid": "4420" 00:16:50.820 }, 00:16:50.820 "peer_address": { 00:16:50.820 "trtype": "TCP", 00:16:50.820 "adrfam": "IPv4", 00:16:50.820 "traddr": "10.0.0.1", 00:16:50.820 "trsvcid": "60260" 00:16:50.820 }, 00:16:50.820 "auth": { 00:16:50.820 "state": "completed", 00:16:50.820 "digest": "sha256", 00:16:50.820 "dhgroup": "ffdhe3072" 00:16:50.820 } 00:16:50.820 } 00:16:50.820 ]' 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.820 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.079 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:51.079 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.650 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.910 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.171 00:16:52.171 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.171 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.171 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.431 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.431 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.431 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.431 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.431 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.432 { 
00:16:52.432 "cntlid": 19, 00:16:52.432 "qid": 0, 00:16:52.432 "state": "enabled", 00:16:52.432 "thread": "nvmf_tgt_poll_group_000", 00:16:52.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.432 "listen_address": { 00:16:52.432 "trtype": "TCP", 00:16:52.432 "adrfam": "IPv4", 00:16:52.432 "traddr": "10.0.0.2", 00:16:52.432 "trsvcid": "4420" 00:16:52.432 }, 00:16:52.432 "peer_address": { 00:16:52.432 "trtype": "TCP", 00:16:52.432 "adrfam": "IPv4", 00:16:52.432 "traddr": "10.0.0.1", 00:16:52.432 "trsvcid": "60280" 00:16:52.432 }, 00:16:52.432 "auth": { 00:16:52.432 "state": "completed", 00:16:52.432 "digest": "sha256", 00:16:52.432 "dhgroup": "ffdhe3072" 00:16:52.432 } 00:16:52.432 } 00:16:52.432 ]' 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.432 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.691 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:52.691 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:53.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.522 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.783 00:16:53.783 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.783 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.783 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.043 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.043 { 00:16:54.043 "cntlid": 21, 00:16:54.043 "qid": 0, 00:16:54.043 "state": "enabled", 00:16:54.043 "thread": "nvmf_tgt_poll_group_000", 00:16:54.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.043 "listen_address": { 00:16:54.043 "trtype": "TCP", 00:16:54.043 "adrfam": "IPv4", 00:16:54.043 "traddr": "10.0.0.2", 00:16:54.043 "trsvcid": "4420" 00:16:54.043 }, 00:16:54.043 "peer_address": { 00:16:54.043 "trtype": "TCP", 00:16:54.043 "adrfam": "IPv4", 00:16:54.043 "traddr": "10.0.0.1", 00:16:54.043 "trsvcid": "42318" 00:16:54.043 }, 00:16:54.043 "auth": { 00:16:54.043 "state": "completed", 00:16:54.043 "digest": "sha256", 00:16:54.043 "dhgroup": "ffdhe3072" 00:16:54.043 } 00:16:54.043 } 00:16:54.043 ]' 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.043 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.305 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:54.305 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:54.876 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.136 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.397 00:16:55.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.657 10:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.657 { 00:16:55.657 "cntlid": 23, 00:16:55.657 "qid": 0, 00:16:55.657 "state": "enabled", 00:16:55.657 "thread": "nvmf_tgt_poll_group_000", 00:16:55.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.657 "listen_address": { 00:16:55.657 "trtype": "TCP", 00:16:55.657 "adrfam": "IPv4", 00:16:55.657 "traddr": "10.0.0.2", 00:16:55.657 "trsvcid": "4420" 00:16:55.657 }, 00:16:55.657 "peer_address": { 00:16:55.657 "trtype": "TCP", 00:16:55.657 "adrfam": "IPv4", 00:16:55.657 "traddr": "10.0.0.1", 00:16:55.657 "trsvcid": "42334" 00:16:55.657 }, 00:16:55.657 "auth": { 00:16:55.657 "state": "completed", 00:16:55.657 "digest": "sha256", 00:16:55.657 "dhgroup": "ffdhe3072" 00:16:55.657 } 00:16:55.657 } 00:16:55.657 ]' 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.657 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.917 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:55.917 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:16:56.485 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.485 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.485 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.485 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.485 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
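== 0 ]]

The sha256/ffdhe3072 pass ends here: key ids 1 through 3 have each been registered on the target, authenticated over a fresh TCP qpair, verified, and torn down, and the loops in target/auth.sh now advance to the ffdhe4096 group. Note the ckey handling at target/auth.sh@68 in the trace above: the ${ckeys[$3]:+...} expansion emits the --dhchap-ctrlr-key option pair only when a controller key exists for that key id, which is why key3 was added without a controller key and authenticated one-way. A minimal bash sketch of the idiom, using illustrative stand-in values (the real script indexes ckeys by the function's $3 argument and stores key names backed by DHHC-1 secrets):

    # ckeys maps key id -> controller key name; id 3 intentionally has no entry,
    # mirroring the trace where key3 is registered without --dhchap-ctrlr-key.
    ckeys=([0]="ckey0" [1]="ckey1" [2]="ckey2")
    keyid=3   # stand-in for the $3 positional parameter of connect_authenticate
    # ':+' expands to the option words only if ckeys[keyid] is set and non-empty;
    # left unquoted on purpose so the flag and its value become separate words.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args: ${ckey[@]:-<none>}"   # prints '<none>' for key id 3
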
00:16:56.485 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.485 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.485 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.485 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.746 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.006 00:16:57.006 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.006 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.006 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.266 { 00:16:57.266 "cntlid": 25, 00:16:57.266 "qid": 0, 00:16:57.266 "state": "enabled", 00:16:57.266 "thread": "nvmf_tgt_poll_group_000", 00:16:57.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.266 "listen_address": { 00:16:57.266 "trtype": "TCP", 00:16:57.266 "adrfam": "IPv4", 00:16:57.266 "traddr": "10.0.0.2", 00:16:57.266 "trsvcid": "4420" 00:16:57.266 }, 00:16:57.266 "peer_address": { 00:16:57.266 "trtype": "TCP", 00:16:57.266 "adrfam": "IPv4", 00:16:57.266 "traddr": "10.0.0.1", 00:16:57.266 "trsvcid": "42354" 00:16:57.266 }, 00:16:57.266 "auth": { 00:16:57.266 "state": "completed", 00:16:57.266 "digest": "sha256", 00:16:57.266 "dhgroup": "ffdhe4096" 00:16:57.266 } 00:16:57.266 } 00:16:57.266 ]' 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.266 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.526 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.526 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.526 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.526 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:57.526 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:16:58.096 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.096 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.096 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.096 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.616 00:16:58.616 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.616 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.616 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.876 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.877 { 00:16:58.877 "cntlid": 27, 00:16:58.877 "qid": 0, 00:16:58.877 "state": "enabled", 00:16:58.877 "thread": "nvmf_tgt_poll_group_000", 00:16:58.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.877 "listen_address": { 00:16:58.877 "trtype": "TCP", 00:16:58.877 "adrfam": "IPv4", 00:16:58.877 "traddr": "10.0.0.2", 00:16:58.877 "trsvcid": "4420" 00:16:58.877 }, 00:16:58.877 "peer_address": { 00:16:58.877 "trtype": "TCP", 00:16:58.877 "adrfam": "IPv4", 00:16:58.877 "traddr": "10.0.0.1", 00:16:58.877 "trsvcid": "42396" 00:16:58.877 }, 00:16:58.877 "auth": { 00:16:58.877 "state": "completed", 00:16:58.877 "digest": "sha256", 00:16:58.877 "dhgroup": "ffdhe4096" 00:16:58.877 } 00:16:58.877 } 00:16:58.877 ]' 00:16:58.877 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.877 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.877 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.877 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.877 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.136 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.137 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.137 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.137 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:16:59.137 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:00.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.076 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.336 00:17:00.336 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
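
For reference, every iteration in this trace follows the same shape. Condensed from the rounds above, with the base64 DHHC-1 secrets elided, one bidirectional sha256/ffdhe4096 round looks roughly like the sketch below; rpc_cmd is the suite's wrapper for the target-side rpc.py socket and hostrpc wraps rpc.py -s /var/tmp/host.sock, as the expanded commands in the trace show. This is a summary of the trace, not an exact replay:

    # Host side: offer exactly one digest/DH-group pair for DH-HMAC-CHAP.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Target side: authorize the host NQN on cnode0; the ctrlr key makes auth bidirectional.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; the fabrics connect runs the DH-CHAP handshake.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: the new qpair must report auth state "completed" with the
    # negotiated digest and dhgroup (checked with jq '.[0].auth.*' in the trace).
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0

    # Tear the SPDK initiator down, then repeat the handshake from the kernel
    # host with nvme-cli, passing raw secrets instead of named keys.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret DHHC-1:01:... --dhchap-ctrl-secret DHHC-1:02:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

The DHHC-1:<t>: prefix on each secret is the NVMe in-band authentication key container; the two-digit field records the hash the secret was transformed with (00 for no transformation, 01/02/03 for SHA-256/384/512), which matches the DHHC-1:01: host secret and DHHC-1:02: controller secret used for key1 above.
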
00:17:00.336 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.336 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.596 { 00:17:00.596 "cntlid": 29, 00:17:00.596 "qid": 0, 00:17:00.596 "state": "enabled", 00:17:00.596 "thread": "nvmf_tgt_poll_group_000", 00:17:00.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.596 "listen_address": { 00:17:00.596 "trtype": "TCP", 00:17:00.596 "adrfam": "IPv4", 00:17:00.596 "traddr": "10.0.0.2", 00:17:00.596 "trsvcid": "4420" 00:17:00.596 }, 00:17:00.596 "peer_address": { 00:17:00.596 "trtype": "TCP", 00:17:00.596 "adrfam": "IPv4", 00:17:00.596 "traddr": "10.0.0.1", 00:17:00.596 "trsvcid": "42432" 00:17:00.596 }, 00:17:00.596 "auth": { 00:17:00.596 "state": "completed", 00:17:00.596 "digest": "sha256", 00:17:00.596 "dhgroup": "ffdhe4096" 00:17:00.596 } 00:17:00.596 } 00:17:00.596 ]' 00:17:00.596 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.596 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.857 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:00.857 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: 
--dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.426 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.685 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.945 00:17:01.945 10:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.945 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.945 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.205 { 00:17:02.205 "cntlid": 31, 00:17:02.205 "qid": 0, 00:17:02.205 "state": "enabled", 00:17:02.205 "thread": "nvmf_tgt_poll_group_000", 00:17:02.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.205 "listen_address": { 00:17:02.205 "trtype": "TCP", 00:17:02.205 "adrfam": "IPv4", 00:17:02.205 "traddr": "10.0.0.2", 00:17:02.205 "trsvcid": "4420" 00:17:02.205 }, 00:17:02.205 "peer_address": { 00:17:02.205 "trtype": "TCP", 00:17:02.205 "adrfam": "IPv4", 00:17:02.205 "traddr": "10.0.0.1", 00:17:02.205 "trsvcid": "42460" 00:17:02.205 }, 00:17:02.205 "auth": { 00:17:02.205 "state": "completed", 00:17:02.205 "digest": "sha256", 00:17:02.205 "dhgroup": "ffdhe4096" 00:17:02.205 } 00:17:02.205 } 00:17:02.205 ]' 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.205 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.464 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:02.464 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.034 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.293 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.553 00:17:03.553 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.553 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.553 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.813 { 00:17:03.813 "cntlid": 33, 00:17:03.813 "qid": 0, 00:17:03.813 "state": "enabled", 00:17:03.813 "thread": "nvmf_tgt_poll_group_000", 00:17:03.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.813 "listen_address": { 00:17:03.813 "trtype": "TCP", 00:17:03.813 "adrfam": "IPv4", 00:17:03.813 "traddr": "10.0.0.2", 00:17:03.813 "trsvcid": "4420" 00:17:03.813 }, 00:17:03.813 "peer_address": { 00:17:03.813 "trtype": "TCP", 00:17:03.813 "adrfam": "IPv4", 00:17:03.813 "traddr": "10.0.0.1", 00:17:03.813 "trsvcid": "58232" 00:17:03.813 }, 00:17:03.813 "auth": { 00:17:03.813 "state": "completed", 00:17:03.813 "digest": "sha256", 00:17:03.813 "dhgroup": "ffdhe6144" 00:17:03.813 } 00:17:03.813 } 00:17:03.813 ]' 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.813 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.073 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.073 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.073 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.073 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret 
DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:04.073 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.012 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.271 00:17:05.271 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.271 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.271 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.530 { 00:17:05.530 "cntlid": 35, 00:17:05.530 "qid": 0, 00:17:05.530 "state": "enabled", 00:17:05.530 "thread": "nvmf_tgt_poll_group_000", 00:17:05.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.530 "listen_address": { 00:17:05.530 "trtype": "TCP", 00:17:05.530 "adrfam": "IPv4", 00:17:05.530 "traddr": "10.0.0.2", 00:17:05.530 "trsvcid": "4420" 00:17:05.530 }, 00:17:05.530 "peer_address": { 00:17:05.530 "trtype": "TCP", 00:17:05.530 "adrfam": "IPv4", 00:17:05.530 "traddr": "10.0.0.1", 00:17:05.530 "trsvcid": "58260" 00:17:05.530 }, 00:17:05.530 "auth": { 00:17:05.530 "state": "completed", 00:17:05.530 "digest": "sha256", 00:17:05.530 "dhgroup": "ffdhe6144" 00:17:05.530 } 00:17:05.530 } 00:17:05.530 ]' 00:17:05.530 10:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.530 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.530 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.530 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.530 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.790 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.790 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.790 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.790 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:05.790 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.729 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.729 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.989 00:17:06.989 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.989 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.989 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.249 { 00:17:07.249 "cntlid": 37, 00:17:07.249 "qid": 0, 00:17:07.249 "state": "enabled", 00:17:07.249 "thread": "nvmf_tgt_poll_group_000", 00:17:07.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.249 "listen_address": { 00:17:07.249 "trtype": "TCP", 00:17:07.249 "adrfam": "IPv4", 00:17:07.249 "traddr": "10.0.0.2", 00:17:07.249 "trsvcid": "4420" 00:17:07.249 }, 00:17:07.249 "peer_address": { 00:17:07.249 "trtype": "TCP", 00:17:07.249 "adrfam": "IPv4", 00:17:07.249 "traddr": "10.0.0.1", 00:17:07.249 "trsvcid": "58294" 00:17:07.249 }, 00:17:07.249 "auth": { 00:17:07.249 "state": "completed", 00:17:07.249 "digest": "sha256", 00:17:07.249 "dhgroup": "ffdhe6144" 00:17:07.249 } 00:17:07.249 } 00:17:07.249 ]' 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.249 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.509 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.509 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.509 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.509 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:07.509 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.509 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:07.509 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.449 10:57:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.449 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.708 00:17:08.708 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.709 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.709 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.969 { 00:17:08.969 "cntlid": 39, 00:17:08.969 "qid": 0, 00:17:08.969 "state": "enabled", 00:17:08.969 "thread": "nvmf_tgt_poll_group_000", 00:17:08.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.969 "listen_address": { 00:17:08.969 "trtype": "TCP", 00:17:08.969 "adrfam": "IPv4", 00:17:08.969 "traddr": "10.0.0.2", 00:17:08.969 "trsvcid": "4420" 00:17:08.969 }, 00:17:08.969 "peer_address": { 00:17:08.969 "trtype": "TCP", 00:17:08.969 "adrfam": "IPv4", 00:17:08.969 "traddr": "10.0.0.1", 00:17:08.969 "trsvcid": "58322" 00:17:08.969 }, 00:17:08.969 "auth": { 00:17:08.969 "state": "completed", 00:17:08.969 "digest": "sha256", 00:17:08.969 "dhgroup": "ffdhe6144" 00:17:08.969 } 00:17:08.969 } 00:17:08.969 ]' 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.969 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.230 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.230 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.230 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:09.230 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.230 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.491 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:09.491 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.063 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.323 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.584 00:17:10.584 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.584 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.584 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.845 { 00:17:10.845 "cntlid": 41, 00:17:10.845 "qid": 0, 00:17:10.845 "state": "enabled", 00:17:10.845 "thread": "nvmf_tgt_poll_group_000", 00:17:10.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.845 "listen_address": { 00:17:10.845 "trtype": "TCP", 00:17:10.845 "adrfam": "IPv4", 00:17:10.845 "traddr": "10.0.0.2", 00:17:10.845 "trsvcid": "4420" 00:17:10.845 }, 00:17:10.845 "peer_address": { 00:17:10.845 "trtype": "TCP", 00:17:10.845 "adrfam": "IPv4", 00:17:10.845 "traddr": "10.0.0.1", 00:17:10.845 "trsvcid": "58332" 00:17:10.845 }, 00:17:10.845 "auth": { 00:17:10.845 "state": "completed", 00:17:10.845 "digest": "sha256", 00:17:10.845 "dhgroup": "ffdhe8192" 00:17:10.845 } 00:17:10.845 } 00:17:10.845 ]' 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.845 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.106 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.106 10:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.106 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.106 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.106 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.367 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:11.367 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.938 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.199 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.459 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.721 { 00:17:12.721 "cntlid": 43, 00:17:12.721 "qid": 0, 00:17:12.721 "state": "enabled", 00:17:12.721 "thread": "nvmf_tgt_poll_group_000", 00:17:12.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.721 "listen_address": { 00:17:12.721 "trtype": "TCP", 00:17:12.721 "adrfam": "IPv4", 00:17:12.721 "traddr": "10.0.0.2", 00:17:12.721 "trsvcid": "4420" 00:17:12.721 }, 00:17:12.721 "peer_address": { 00:17:12.721 "trtype": "TCP", 00:17:12.721 "adrfam": "IPv4", 00:17:12.721 "traddr": "10.0.0.1", 00:17:12.721 "trsvcid": "58352" 00:17:12.721 }, 00:17:12.721 "auth": { 00:17:12.721 "state": "completed", 00:17:12.721 "digest": "sha256", 00:17:12.721 "dhgroup": "ffdhe8192" 00:17:12.721 } 00:17:12.721 } 00:17:12.721 ]' 00:17:12.721 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.982 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.242 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:13.242 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.812 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.072 10:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.072 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.333 00:17:14.593 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.593 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.593 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.593 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.593 { 00:17:14.593 "cntlid": 45, 00:17:14.593 "qid": 0, 00:17:14.593 "state": "enabled", 00:17:14.593 "thread": "nvmf_tgt_poll_group_000", 00:17:14.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.593 "listen_address": { 00:17:14.593 "trtype": "TCP", 00:17:14.593 "adrfam": "IPv4", 00:17:14.593 "traddr": "10.0.0.2", 00:17:14.593 "trsvcid": "4420" 00:17:14.593 }, 00:17:14.593 "peer_address": { 00:17:14.593 "trtype": "TCP", 00:17:14.593 "adrfam": "IPv4", 00:17:14.593 "traddr": "10.0.0.1", 00:17:14.593 "trsvcid": "38910" 00:17:14.593 }, 00:17:14.593 "auth": { 00:17:14.593 "state": "completed", 00:17:14.593 "digest": "sha256", 00:17:14.593 "dhgroup": "ffdhe8192" 00:17:14.593 } 00:17:14.593 } 00:17:14.593 ]' 00:17:14.593 
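The records above and below repeat one fixed cycle per digest/dhgroup/key-id combination (the target/auth.sh@120-123 loop visible in the trace). Condensed into a standalone sketch: the rpc.py path, sockets, NQNs, and RPC calls are taken verbatim from this trace, but the scaffolding (variable names, the keyid stand-in) is inferred, so treat it as illustrative rather than the script itself:

    #!/usr/bin/env bash
    # Illustrative condensation of one connect_authenticate cycle from this trace.
    # rpc/subnqn/hostnqn are copied from the log; keyid=2 stands in for the 0-3 loop.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    keyid=2

    # Host-side SPDK app (-s /var/tmp/host.sock): pin the initiator to the
    # digest/dhgroup combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side (default RPC socket): register the host with the key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach over TCP, then confirm the qpair negotiated the expected auth.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'  # expect: sha256 ffdhe8192 completed

    # Tear down so the next digest/dhgroup/key combination starts clean.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"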
10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.853 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.113 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:15.113 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.681 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.941 10:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.941 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.201 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.461 { 00:17:16.461 "cntlid": 47, 00:17:16.461 "qid": 0, 00:17:16.461 "state": "enabled", 00:17:16.461 "thread": "nvmf_tgt_poll_group_000", 00:17:16.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.461 "listen_address": { 00:17:16.461 "trtype": "TCP", 00:17:16.461 "adrfam": "IPv4", 00:17:16.461 "traddr": "10.0.0.2", 00:17:16.461 "trsvcid": "4420" 00:17:16.461 }, 00:17:16.461 "peer_address": { 00:17:16.461 "trtype": "TCP", 00:17:16.461 "adrfam": "IPv4", 00:17:16.461 "traddr": "10.0.0.1", 00:17:16.461 "trsvcid": "38934" 00:17:16.461 }, 00:17:16.461 "auth": { 00:17:16.461 "state": "completed", 00:17:16.461 
"digest": "sha256", 00:17:16.461 "dhgroup": "ffdhe8192" 00:17:16.461 } 00:17:16.461 } 00:17:16.461 ]' 00:17:16.461 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.721 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.981 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:16.981 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.550 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:17.811 10:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.811 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.071 { 00:17:18.071 "cntlid": 49, 00:17:18.071 "qid": 0, 00:17:18.071 "state": "enabled", 00:17:18.071 "thread": "nvmf_tgt_poll_group_000", 00:17:18.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.071 "listen_address": { 00:17:18.071 "trtype": "TCP", 00:17:18.071 "adrfam": "IPv4", 
00:17:18.071 "traddr": "10.0.0.2", 00:17:18.071 "trsvcid": "4420" 00:17:18.071 }, 00:17:18.071 "peer_address": { 00:17:18.071 "trtype": "TCP", 00:17:18.071 "adrfam": "IPv4", 00:17:18.071 "traddr": "10.0.0.1", 00:17:18.071 "trsvcid": "38962" 00:17:18.071 }, 00:17:18.071 "auth": { 00:17:18.071 "state": "completed", 00:17:18.071 "digest": "sha384", 00:17:18.071 "dhgroup": "null" 00:17:18.071 } 00:17:18.071 } 00:17:18.071 ]' 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.071 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:18.333 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.273 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.274 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.274 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.274 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.274 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.534 00:17:19.534 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.534 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.534 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.795 { 00:17:19.795 "cntlid": 51, 00:17:19.795 "qid": 0, 00:17:19.795 "state": "enabled", 
00:17:19.795 "thread": "nvmf_tgt_poll_group_000", 00:17:19.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.795 "listen_address": { 00:17:19.795 "trtype": "TCP", 00:17:19.795 "adrfam": "IPv4", 00:17:19.795 "traddr": "10.0.0.2", 00:17:19.795 "trsvcid": "4420" 00:17:19.795 }, 00:17:19.795 "peer_address": { 00:17:19.795 "trtype": "TCP", 00:17:19.795 "adrfam": "IPv4", 00:17:19.795 "traddr": "10.0.0.1", 00:17:19.795 "trsvcid": "39000" 00:17:19.795 }, 00:17:19.795 "auth": { 00:17:19.795 "state": "completed", 00:17:19.795 "digest": "sha384", 00:17:19.795 "dhgroup": "null" 00:17:19.795 } 00:17:19.795 } 00:17:19.795 ]' 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.795 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.055 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:20.055 10:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:20.626 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.886 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.146 00:17:21.146 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.146 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.146 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.406 10:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.406 { 00:17:21.406 "cntlid": 53, 00:17:21.406 "qid": 0, 00:17:21.406 "state": "enabled", 00:17:21.406 "thread": "nvmf_tgt_poll_group_000", 00:17:21.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.406 "listen_address": { 00:17:21.406 "trtype": "TCP", 00:17:21.406 "adrfam": "IPv4", 00:17:21.406 "traddr": "10.0.0.2", 00:17:21.406 "trsvcid": "4420" 00:17:21.406 }, 00:17:21.406 "peer_address": { 00:17:21.406 "trtype": "TCP", 00:17:21.406 "adrfam": "IPv4", 00:17:21.406 "traddr": "10.0.0.1", 00:17:21.406 "trsvcid": "39034" 00:17:21.406 }, 00:17:21.406 "auth": { 00:17:21.406 "state": "completed", 00:17:21.406 "digest": "sha384", 00:17:21.406 "dhgroup": "null" 00:17:21.406 } 00:17:21.406 } 00:17:21.406 ]' 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.406 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.666 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:21.666 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.236 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.497 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.757 00:17:22.757 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.757 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.757 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.017 { 00:17:23.017 "cntlid": 55, 00:17:23.017 "qid": 0, 00:17:23.017 "state": "enabled", 00:17:23.017 "thread": "nvmf_tgt_poll_group_000", 00:17:23.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.017 "listen_address": { 00:17:23.017 "trtype": "TCP", 00:17:23.017 "adrfam": "IPv4", 00:17:23.017 "traddr": "10.0.0.2", 00:17:23.017 "trsvcid": "4420" 00:17:23.017 }, 00:17:23.017 "peer_address": { 00:17:23.017 "trtype": "TCP", 00:17:23.017 "adrfam": "IPv4", 00:17:23.017 "traddr": "10.0.0.1", 00:17:23.017 "trsvcid": "39060" 00:17:23.017 }, 00:17:23.017 "auth": { 00:17:23.017 "state": "completed", 00:17:23.017 "digest": "sha384", 00:17:23.017 "dhgroup": "null" 00:17:23.017 } 00:17:23.017 } 00:17:23.017 ]' 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.017 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.278 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:23.278 10:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:23.847 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.847 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.847 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.847 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.107 10:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.107 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.368 00:17:24.368 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.368 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.368 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.628 { 00:17:24.628 "cntlid": 57, 00:17:24.628 "qid": 0, 00:17:24.628 "state": "enabled", 00:17:24.628 "thread": "nvmf_tgt_poll_group_000", 00:17:24.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.628 "listen_address": { 00:17:24.628 "trtype": "TCP", 00:17:24.628 "adrfam": "IPv4", 00:17:24.628 "traddr": "10.0.0.2", 00:17:24.628 "trsvcid": "4420" 00:17:24.628 }, 00:17:24.628 "peer_address": { 00:17:24.628 "trtype": "TCP", 00:17:24.628 "adrfam": "IPv4", 00:17:24.628 "traddr": "10.0.0.1", 00:17:24.628 "trsvcid": "55736" 00:17:24.628 }, 00:17:24.628 "auth": { 00:17:24.628 "state": "completed", 00:17:24.628 "digest": "sha384", 00:17:24.628 "dhgroup": "ffdhe2048" 00:17:24.628 } 00:17:24.628 } 00:17:24.628 ]' 00:17:24.628 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.628 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.628 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.628 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.628 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.629 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.629 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.629 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.889 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:24.889 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:25.460 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.460 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.460 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.460 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.720 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.720 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.720 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.720 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.720 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.980 00:17:25.980 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.980 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.980 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.239 { 00:17:26.239 "cntlid": 59, 00:17:26.239 "qid": 0, 00:17:26.239 "state": "enabled", 00:17:26.239 "thread": "nvmf_tgt_poll_group_000", 00:17:26.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.239 "listen_address": { 00:17:26.239 "trtype": "TCP", 00:17:26.239 "adrfam": "IPv4", 00:17:26.239 "traddr": "10.0.0.2", 00:17:26.239 "trsvcid": "4420" 00:17:26.239 }, 00:17:26.239 "peer_address": { 00:17:26.239 "trtype": "TCP", 00:17:26.239 "adrfam": "IPv4", 00:17:26.239 "traddr": "10.0.0.1", 00:17:26.239 "trsvcid": "55764" 00:17:26.239 }, 00:17:26.239 "auth": { 00:17:26.239 "state": "completed", 00:17:26.239 "digest": "sha384", 00:17:26.239 "dhgroup": "ffdhe2048" 00:17:26.239 } 00:17:26.239 } 00:17:26.239 ]' 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.239 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.499 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:26.499 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.072 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.332 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.592 00:17:27.592 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.592 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.592 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.852 { 00:17:27.852 "cntlid": 61, 00:17:27.852 "qid": 0, 00:17:27.852 "state": "enabled", 00:17:27.852 "thread": "nvmf_tgt_poll_group_000", 00:17:27.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.852 "listen_address": { 00:17:27.852 "trtype": "TCP", 00:17:27.852 "adrfam": "IPv4", 00:17:27.852 "traddr": "10.0.0.2", 00:17:27.852 "trsvcid": "4420" 00:17:27.852 }, 00:17:27.852 "peer_address": { 00:17:27.852 "trtype": "TCP", 00:17:27.852 "adrfam": "IPv4", 00:17:27.852 "traddr": "10.0.0.1", 00:17:27.852 "trsvcid": "55808" 00:17:27.852 }, 00:17:27.852 "auth": { 00:17:27.852 "state": "completed", 00:17:27.852 "digest": "sha384", 00:17:27.852 "dhgroup": "ffdhe2048" 00:17:27.852 } 00:17:27.852 } 00:17:27.852 ]' 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.852 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.112 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:28.112 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.683 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.943 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:28.943 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.943 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.943 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.944 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.204 00:17:29.204 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.204 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.204 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.464 { 00:17:29.464 "cntlid": 63, 00:17:29.464 "qid": 0, 00:17:29.464 "state": "enabled", 00:17:29.464 "thread": "nvmf_tgt_poll_group_000", 00:17:29.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.464 "listen_address": { 00:17:29.464 "trtype": "TCP", 00:17:29.464 "adrfam": "IPv4", 00:17:29.464 "traddr": "10.0.0.2", 00:17:29.464 "trsvcid": "4420" 00:17:29.464 }, 00:17:29.464 "peer_address": { 00:17:29.464 "trtype": "TCP", 00:17:29.464 "adrfam": "IPv4", 00:17:29.464 "traddr": "10.0.0.1", 00:17:29.464 "trsvcid": "55836" 00:17:29.464 }, 00:17:29.464 "auth": { 00:17:29.464 "state": "completed", 00:17:29.464 "digest": "sha384", 00:17:29.464 "dhgroup": "ffdhe2048" 00:17:29.464 } 00:17:29.464 } 00:17:29.464 ]' 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.464 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.724 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:29.724 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:30.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.294 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.555 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.815 
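After each authenticated attach the test does not trust the attach return code alone: it checks the controller name on the host side, then reads the negotiated parameters back from the target's qpair listing, which is where the "auth" blocks printed in this log come from. A sketch of those checks for the ffdhe3072/key0 iteration just attached, reusing the variables from the earlier sketch and the same jq filters as the log:

    # Host side: the controller must exist under the expected name.
    [[ $($RPC_PY -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must report the forced digest and dhgroup
    # and an authentication state of "completed".
    qpairs=$($RPC_PY nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]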
00:17:30.815 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.815 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.815 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.075 { 00:17:31.075 "cntlid": 65, 00:17:31.075 "qid": 0, 00:17:31.075 "state": "enabled", 00:17:31.075 "thread": "nvmf_tgt_poll_group_000", 00:17:31.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.075 "listen_address": { 00:17:31.075 "trtype": "TCP", 00:17:31.075 "adrfam": "IPv4", 00:17:31.075 "traddr": "10.0.0.2", 00:17:31.075 "trsvcid": "4420" 00:17:31.075 }, 00:17:31.075 "peer_address": { 00:17:31.075 "trtype": "TCP", 00:17:31.075 "adrfam": "IPv4", 00:17:31.075 "traddr": "10.0.0.1", 00:17:31.075 "trsvcid": "55872" 00:17:31.075 }, 00:17:31.075 "auth": { 00:17:31.075 "state": "completed", 00:17:31.075 "digest": "sha384", 00:17:31.075 "dhgroup": "ffdhe3072" 00:17:31.075 } 00:17:31.075 } 00:17:31.075 ]' 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.075 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.076 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.076 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.076 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.076 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.336 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:31.336 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.905 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.426 00:17:32.426 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.426 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.426 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.686 { 00:17:32.686 "cntlid": 67, 00:17:32.686 "qid": 0, 00:17:32.686 "state": "enabled", 00:17:32.686 "thread": "nvmf_tgt_poll_group_000", 00:17:32.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.686 "listen_address": { 00:17:32.686 "trtype": "TCP", 00:17:32.686 "adrfam": "IPv4", 00:17:32.686 "traddr": "10.0.0.2", 00:17:32.686 "trsvcid": "4420" 00:17:32.686 }, 00:17:32.686 "peer_address": { 00:17:32.686 "trtype": "TCP", 00:17:32.686 "adrfam": "IPv4", 00:17:32.686 "traddr": "10.0.0.1", 00:17:32.686 "trsvcid": "55902" 00:17:32.686 }, 00:17:32.686 "auth": { 00:17:32.686 "state": "completed", 00:17:32.686 "digest": "sha384", 00:17:32.686 "dhgroup": "ffdhe3072" 00:17:32.686 } 00:17:32.686 } 00:17:32.686 ]' 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.686 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.945 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret 
DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:32.946 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:33.516 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.516 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.776 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.036 00:17:34.036 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.036 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.036 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.296 { 00:17:34.296 "cntlid": 69, 00:17:34.296 "qid": 0, 00:17:34.296 "state": "enabled", 00:17:34.296 "thread": "nvmf_tgt_poll_group_000", 00:17:34.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.296 "listen_address": { 00:17:34.296 "trtype": "TCP", 00:17:34.296 "adrfam": "IPv4", 00:17:34.296 "traddr": "10.0.0.2", 00:17:34.296 "trsvcid": "4420" 00:17:34.296 }, 00:17:34.296 "peer_address": { 00:17:34.296 "trtype": "TCP", 00:17:34.296 "adrfam": "IPv4", 00:17:34.296 "traddr": "10.0.0.1", 00:17:34.296 "trsvcid": "52244" 00:17:34.296 }, 00:17:34.296 "auth": { 00:17:34.296 "state": "completed", 00:17:34.296 "digest": "sha384", 00:17:34.296 "dhgroup": "ffdhe3072" 00:17:34.296 } 00:17:34.296 } 00:17:34.296 ]' 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.296 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.297 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.297 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.297 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.297 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.297 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:34.556 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:34.556 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.125 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
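Each connect_authenticate iteration traced above is the same RPC exchange between the target (default RPC socket) and the host application (-s /var/tmp/host.sock). A condensed sketch of one cycle follows, using only commands and flags that appear in this trace; the key2/ckey2 names and the ffdhe3072 group are from the iteration just completed, and the DHHC-1 secrets are abbreviated here rather than repeated:

#!/usr/bin/env bash
# One connect_authenticate cycle, condensed from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                     # host-side SPDK application
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host: pin DH-HMAC-CHAP negotiation to a single digest/dhgroup pair.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target: authorize the host NQN with a key (controller key optional).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host: attach a controller, which forces authentication on the new qpair.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target: confirm the qpair authenticated, then tear the path down.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Kernel-initiator leg of the same cycle (DHHC-1 secrets abbreviated here;
# the trace carries the full base64 blobs).
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The same cycle then repeats below with the next key index, so the trace reads as this block replayed with different --dhchap-key arguments and secrets.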
00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.385 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.644 00:17:35.644 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.644 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.644 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.905 { 00:17:35.905 "cntlid": 71, 00:17:35.905 "qid": 0, 00:17:35.905 "state": "enabled", 00:17:35.905 "thread": "nvmf_tgt_poll_group_000", 00:17:35.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.905 "listen_address": { 00:17:35.905 "trtype": "TCP", 00:17:35.905 "adrfam": "IPv4", 00:17:35.905 "traddr": "10.0.0.2", 00:17:35.905 "trsvcid": "4420" 00:17:35.905 }, 00:17:35.905 "peer_address": { 00:17:35.905 "trtype": "TCP", 00:17:35.905 "adrfam": "IPv4", 00:17:35.905 "traddr": "10.0.0.1", 00:17:35.905 "trsvcid": "52260" 00:17:35.905 }, 00:17:35.905 "auth": { 00:17:35.905 "state": "completed", 00:17:35.905 "digest": "sha384", 00:17:35.905 "dhgroup": "ffdhe3072" 00:17:35.905 } 00:17:35.905 } 00:17:35.905 ]' 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.905 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.166 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:36.166 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.737 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
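The @119 and @120 markers in the trace are the two loops that drive this whole section: an outer loop over DH groups and an inner loop over key indices. The sketch below is a reconstruction of their shape only, not quoted from target/auth.sh; the array contents are an assumption inferred from which groups and keys appear in this run, and hostrpc and connect_authenticate are the script's own helpers visible in the trace:

# Assumed driver loops behind this section (target/auth.sh@119-123).
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # assumption: groups seen in this run
keys=(key0 key1 key2 key3)                 # names only; real values are DHHC-1 secrets
ckeys=(ckey0 ckey1 ckey2)                  # note: no ckeys[3]
for dhgroup in "${dhgroups[@]}"; do        # @119
    for keyid in "${!keys[@]}"; do         # @120
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"  # @121
        connect_authenticate sha384 "$dhgroup" "$keyid"                                     # @123
    done
done

Inside connect_authenticate, the controller key is optional. The array expansion at @68,

ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

expands to nothing when ckeys[keyid] is unset, which is why every key3 iteration in this trace calls nvmf_subsystem_add_host with --dhchap-key only and no --dhchap-ctrlr-key.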
00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.997 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.257 00:17:37.257 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.257 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.257 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.518 { 00:17:37.518 "cntlid": 73, 00:17:37.518 "qid": 0, 00:17:37.518 "state": "enabled", 00:17:37.518 "thread": "nvmf_tgt_poll_group_000", 00:17:37.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.518 "listen_address": { 00:17:37.518 "trtype": "TCP", 00:17:37.518 "adrfam": "IPv4", 00:17:37.518 "traddr": "10.0.0.2", 00:17:37.518 "trsvcid": "4420" 00:17:37.518 }, 00:17:37.518 "peer_address": { 00:17:37.518 "trtype": "TCP", 00:17:37.518 "adrfam": "IPv4", 00:17:37.518 "traddr": "10.0.0.1", 00:17:37.518 "trsvcid": "52288" 00:17:37.518 }, 00:17:37.518 "auth": { 00:17:37.518 "state": "completed", 00:17:37.518 "digest": "sha384", 00:17:37.518 "dhgroup": "ffdhe4096" 00:17:37.518 } 00:17:37.518 } 00:17:37.518 ]' 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.518 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.518 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.518 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.777 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.777 
10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.777 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.777 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:37.777 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:38.346 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.607 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.607 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.868 00:17:38.868 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.868 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.868 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.128 { 00:17:39.128 "cntlid": 75, 00:17:39.128 "qid": 0, 00:17:39.128 "state": "enabled", 00:17:39.128 "thread": "nvmf_tgt_poll_group_000", 00:17:39.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.128 "listen_address": { 00:17:39.128 "trtype": "TCP", 00:17:39.128 "adrfam": "IPv4", 00:17:39.128 "traddr": "10.0.0.2", 00:17:39.128 "trsvcid": "4420" 00:17:39.128 }, 00:17:39.128 "peer_address": { 00:17:39.128 "trtype": "TCP", 00:17:39.128 "adrfam": "IPv4", 00:17:39.128 "traddr": "10.0.0.1", 00:17:39.128 "trsvcid": "52300" 00:17:39.128 }, 00:17:39.128 "auth": { 00:17:39.128 "state": "completed", 00:17:39.128 "digest": "sha384", 00:17:39.128 "dhgroup": "ffdhe4096" 00:17:39.128 } 00:17:39.128 } 00:17:39.128 ]' 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:39.128 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.388 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.388 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.389 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.389 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:39.389 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:39.958 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.958 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.958 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.958 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.219 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.487 00:17:40.487 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.487 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.488 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.810 { 00:17:40.810 "cntlid": 77, 00:17:40.810 "qid": 0, 00:17:40.810 "state": "enabled", 00:17:40.810 "thread": "nvmf_tgt_poll_group_000", 00:17:40.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.810 "listen_address": { 00:17:40.810 "trtype": "TCP", 00:17:40.810 "adrfam": "IPv4", 00:17:40.810 "traddr": "10.0.0.2", 00:17:40.810 "trsvcid": "4420" 00:17:40.810 }, 00:17:40.810 "peer_address": { 00:17:40.810 "trtype": "TCP", 00:17:40.810 "adrfam": "IPv4", 00:17:40.810 "traddr": "10.0.0.1", 00:17:40.810 "trsvcid": "52326" 00:17:40.810 }, 00:17:40.810 "auth": { 00:17:40.810 "state": "completed", 00:17:40.810 "digest": "sha384", 00:17:40.810 "dhgroup": "ffdhe4096" 00:17:40.810 } 00:17:40.810 } 00:17:40.810 ]' 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.810 10:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.810 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.086 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:41.086 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.759 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.019 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.278 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.278 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.278 { 00:17:42.278 "cntlid": 79, 00:17:42.278 "qid": 0, 00:17:42.278 "state": "enabled", 00:17:42.278 "thread": "nvmf_tgt_poll_group_000", 00:17:42.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.278 "listen_address": { 00:17:42.278 "trtype": "TCP", 00:17:42.279 "adrfam": "IPv4", 00:17:42.279 "traddr": "10.0.0.2", 00:17:42.279 "trsvcid": "4420" 00:17:42.279 }, 00:17:42.279 "peer_address": { 00:17:42.279 "trtype": "TCP", 00:17:42.279 "adrfam": "IPv4", 00:17:42.279 "traddr": "10.0.0.1", 00:17:42.279 "trsvcid": "52346" 00:17:42.279 }, 00:17:42.279 "auth": { 00:17:42.279 "state": "completed", 00:17:42.279 "digest": "sha384", 00:17:42.279 "dhgroup": "ffdhe4096" 00:17:42.279 } 00:17:42.279 } 00:17:42.279 ]' 00:17:42.279 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.539 10:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.539 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.799 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:42.799 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:43.369 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.370 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.629 10:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.629 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.889 00:17:43.889 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.889 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.889 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.149 { 00:17:44.149 "cntlid": 81, 00:17:44.149 "qid": 0, 00:17:44.149 "state": "enabled", 00:17:44.149 "thread": "nvmf_tgt_poll_group_000", 00:17:44.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.149 "listen_address": { 00:17:44.149 "trtype": "TCP", 00:17:44.149 "adrfam": "IPv4", 00:17:44.149 "traddr": "10.0.0.2", 00:17:44.149 "trsvcid": "4420" 00:17:44.149 }, 00:17:44.149 "peer_address": { 00:17:44.149 "trtype": "TCP", 00:17:44.149 "adrfam": "IPv4", 00:17:44.149 "traddr": "10.0.0.1", 00:17:44.149 "trsvcid": "57840" 00:17:44.149 }, 00:17:44.149 "auth": { 00:17:44.149 "state": "completed", 00:17:44.149 "digest": 
"sha384", 00:17:44.149 "dhgroup": "ffdhe6144" 00:17:44.149 } 00:17:44.149 } 00:17:44.149 ]' 00:17:44.149 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.150 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.410 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:44.410 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.980 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.263 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.522 00:17:45.522 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.523 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.523 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.782 { 00:17:45.782 "cntlid": 83, 00:17:45.782 "qid": 0, 00:17:45.782 "state": "enabled", 00:17:45.782 "thread": "nvmf_tgt_poll_group_000", 00:17:45.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.782 "listen_address": { 00:17:45.782 "trtype": "TCP", 00:17:45.782 "adrfam": "IPv4", 00:17:45.782 "traddr": "10.0.0.2", 00:17:45.782 
"trsvcid": "4420" 00:17:45.782 }, 00:17:45.782 "peer_address": { 00:17:45.782 "trtype": "TCP", 00:17:45.782 "adrfam": "IPv4", 00:17:45.782 "traddr": "10.0.0.1", 00:17:45.782 "trsvcid": "57868" 00:17:45.782 }, 00:17:45.782 "auth": { 00:17:45.782 "state": "completed", 00:17:45.782 "digest": "sha384", 00:17:45.782 "dhgroup": "ffdhe6144" 00:17:45.782 } 00:17:45.782 } 00:17:45.782 ]' 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.782 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.042 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.042 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.042 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.042 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:46.042 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.983 
10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.983 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.244 00:17:47.244 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.244 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.244 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.505 { 00:17:47.505 "cntlid": 85, 00:17:47.505 "qid": 0, 00:17:47.505 "state": "enabled", 00:17:47.505 "thread": "nvmf_tgt_poll_group_000", 00:17:47.505 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.505 "listen_address": { 00:17:47.505 "trtype": "TCP", 00:17:47.505 "adrfam": "IPv4", 00:17:47.505 "traddr": "10.0.0.2", 00:17:47.505 "trsvcid": "4420" 00:17:47.505 }, 00:17:47.505 "peer_address": { 00:17:47.505 "trtype": "TCP", 00:17:47.505 "adrfam": "IPv4", 00:17:47.505 "traddr": "10.0.0.1", 00:17:47.505 "trsvcid": "57896" 00:17:47.505 }, 00:17:47.505 "auth": { 00:17:47.505 "state": "completed", 00:17:47.505 "digest": "sha384", 00:17:47.505 "dhgroup": "ffdhe6144" 00:17:47.505 } 00:17:47.505 } 00:17:47.505 ]' 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.505 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.765 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.765 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.765 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.765 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:47.765 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:48.335 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.596 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.596 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.596 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.857 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.117 { 00:17:49.117 "cntlid": 87, 
00:17:49.117 "qid": 0, 00:17:49.117 "state": "enabled", 00:17:49.117 "thread": "nvmf_tgt_poll_group_000", 00:17:49.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.117 "listen_address": { 00:17:49.117 "trtype": "TCP", 00:17:49.117 "adrfam": "IPv4", 00:17:49.117 "traddr": "10.0.0.2", 00:17:49.117 "trsvcid": "4420" 00:17:49.117 }, 00:17:49.117 "peer_address": { 00:17:49.117 "trtype": "TCP", 00:17:49.117 "adrfam": "IPv4", 00:17:49.117 "traddr": "10.0.0.1", 00:17:49.117 "trsvcid": "57914" 00:17:49.117 }, 00:17:49.117 "auth": { 00:17:49.117 "state": "completed", 00:17:49.117 "digest": "sha384", 00:17:49.117 "dhgroup": "ffdhe6144" 00:17:49.117 } 00:17:49.117 } 00:17:49.117 ]' 00:17:49.117 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.378 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.639 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:49.639 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.211 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.471 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.471 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.471 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.471 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.733 00:17:50.733 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.733 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.733 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.993 { 00:17:50.993 "cntlid": 89, 00:17:50.993 "qid": 0, 00:17:50.993 "state": "enabled", 00:17:50.993 "thread": "nvmf_tgt_poll_group_000", 00:17:50.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.993 "listen_address": { 00:17:50.993 "trtype": "TCP", 00:17:50.993 "adrfam": "IPv4", 00:17:50.993 "traddr": "10.0.0.2", 00:17:50.993 "trsvcid": "4420" 00:17:50.993 }, 00:17:50.993 "peer_address": { 00:17:50.993 "trtype": "TCP", 00:17:50.993 "adrfam": "IPv4", 00:17:50.993 "traddr": "10.0.0.1", 00:17:50.993 "trsvcid": "57944" 00:17:50.993 }, 00:17:50.993 "auth": { 00:17:50.993 "state": "completed", 00:17:50.993 "digest": "sha384", 00:17:50.993 "dhgroup": "ffdhe8192" 00:17:50.993 } 00:17:50.993 } 00:17:50.993 ]' 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.993 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.254 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.254 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.254 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.254 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:51.254 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.197 10:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.197 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.767 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.767 { 00:17:52.767 "cntlid": 91, 00:17:52.767 "qid": 0, 00:17:52.767 "state": "enabled", 00:17:52.767 "thread": "nvmf_tgt_poll_group_000", 00:17:52.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.767 "listen_address": { 00:17:52.767 "trtype": "TCP", 00:17:52.767 "adrfam": "IPv4", 00:17:52.767 "traddr": "10.0.0.2", 00:17:52.767 "trsvcid": "4420" 00:17:52.767 }, 00:17:52.767 "peer_address": { 00:17:52.767 "trtype": "TCP", 00:17:52.767 "adrfam": "IPv4", 00:17:52.767 "traddr": "10.0.0.1", 00:17:52.767 "trsvcid": "57980" 00:17:52.767 }, 00:17:52.767 "auth": { 00:17:52.767 "state": "completed", 00:17:52.767 "digest": "sha384", 00:17:52.767 "dhgroup": "ffdhe8192" 00:17:52.767 } 00:17:52.767 } 00:17:52.767 ]' 00:17:52.767 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.027 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.287 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:53.287 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.858 10:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.858 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.119 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.379 00:17:54.379 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.379 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.379 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.640 10:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.640 { 00:17:54.640 "cntlid": 93, 00:17:54.640 "qid": 0, 00:17:54.640 "state": "enabled", 00:17:54.640 "thread": "nvmf_tgt_poll_group_000", 00:17:54.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.640 "listen_address": { 00:17:54.640 "trtype": "TCP", 00:17:54.640 "adrfam": "IPv4", 00:17:54.640 "traddr": "10.0.0.2", 00:17:54.640 "trsvcid": "4420" 00:17:54.640 }, 00:17:54.640 "peer_address": { 00:17:54.640 "trtype": "TCP", 00:17:54.640 "adrfam": "IPv4", 00:17:54.640 "traddr": "10.0.0.1", 00:17:54.640 "trsvcid": "55730" 00:17:54.640 }, 00:17:54.640 "auth": { 00:17:54.640 "state": "completed", 00:17:54.640 "digest": "sha384", 00:17:54.640 "dhgroup": "ffdhe8192" 00:17:54.640 } 00:17:54.640 } 00:17:54.640 ]' 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.640 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.901 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.901 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.901 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.901 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.901 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.902 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:54.902 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.842 10:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.842 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.415 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.415 { 00:17:56.415 "cntlid": 95, 00:17:56.415 "qid": 0, 00:17:56.415 "state": "enabled", 00:17:56.415 "thread": "nvmf_tgt_poll_group_000", 00:17:56.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.415 "listen_address": { 00:17:56.415 "trtype": "TCP", 00:17:56.415 "adrfam": "IPv4", 00:17:56.415 "traddr": "10.0.0.2", 00:17:56.415 "trsvcid": "4420" 00:17:56.415 }, 00:17:56.415 "peer_address": { 00:17:56.415 "trtype": "TCP", 00:17:56.415 "adrfam": "IPv4", 00:17:56.415 "traddr": "10.0.0.1", 00:17:56.415 "trsvcid": "55766" 00:17:56.415 }, 00:17:56.415 "auth": { 00:17:56.415 "state": "completed", 00:17:56.415 "digest": "sha384", 00:17:56.415 "dhgroup": "ffdhe8192" 00:17:56.415 } 00:17:56.415 } 00:17:56.415 ]' 00:17:56.415 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.676 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.676 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.676 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.676 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.676 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.676 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.676 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.937 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:56.937 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.507 10:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.507 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.768 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.029 00:17:58.029 
10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.029 { 00:17:58.029 "cntlid": 97, 00:17:58.029 "qid": 0, 00:17:58.029 "state": "enabled", 00:17:58.029 "thread": "nvmf_tgt_poll_group_000", 00:17:58.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.029 "listen_address": { 00:17:58.029 "trtype": "TCP", 00:17:58.029 "adrfam": "IPv4", 00:17:58.029 "traddr": "10.0.0.2", 00:17:58.029 "trsvcid": "4420" 00:17:58.029 }, 00:17:58.029 "peer_address": { 00:17:58.029 "trtype": "TCP", 00:17:58.029 "adrfam": "IPv4", 00:17:58.029 "traddr": "10.0.0.1", 00:17:58.029 "trsvcid": "55790" 00:17:58.029 }, 00:17:58.029 "auth": { 00:17:58.029 "state": "completed", 00:17:58.029 "digest": "sha512", 00:17:58.029 "dhgroup": "null" 00:17:58.029 } 00:17:58.029 } 00:17:58.029 ]' 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.029 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:58.291 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.232 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.491 00:17:59.491 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.491 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.491 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.751 { 00:17:59.751 "cntlid": 99, 00:17:59.751 "qid": 0, 00:17:59.751 "state": "enabled", 00:17:59.751 "thread": "nvmf_tgt_poll_group_000", 00:17:59.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.751 "listen_address": { 00:17:59.751 "trtype": "TCP", 00:17:59.751 "adrfam": "IPv4", 00:17:59.751 "traddr": "10.0.0.2", 00:17:59.751 "trsvcid": "4420" 00:17:59.751 }, 00:17:59.751 "peer_address": { 00:17:59.751 "trtype": "TCP", 00:17:59.751 "adrfam": "IPv4", 00:17:59.751 "traddr": "10.0.0.1", 00:17:59.751 "trsvcid": "55820" 00:17:59.751 }, 00:17:59.751 "auth": { 00:17:59.751 "state": "completed", 00:17:59.751 "digest": "sha512", 00:17:59.751 "dhgroup": "null" 00:17:59.751 } 00:17:59.751 } 00:17:59.751 ]' 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.751 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.012 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:00.012 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.584 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:00.845 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.107 00:18:01.107 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.107 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.107 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.367 { 00:18:01.367 "cntlid": 101, 00:18:01.367 "qid": 0, 00:18:01.367 "state": "enabled", 00:18:01.367 "thread": "nvmf_tgt_poll_group_000", 00:18:01.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.367 "listen_address": { 00:18:01.367 "trtype": "TCP", 00:18:01.367 "adrfam": "IPv4", 00:18:01.367 "traddr": "10.0.0.2", 00:18:01.367 "trsvcid": "4420" 00:18:01.367 }, 00:18:01.367 "peer_address": { 00:18:01.367 "trtype": "TCP", 00:18:01.367 "adrfam": "IPv4", 00:18:01.367 "traddr": "10.0.0.1", 00:18:01.367 "trsvcid": "55848" 00:18:01.367 }, 00:18:01.367 "auth": { 00:18:01.367 "state": "completed", 00:18:01.367 "digest": "sha512", 00:18:01.367 "dhgroup": "null" 00:18:01.367 } 00:18:01.367 } 00:18:01.367 ]' 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.367 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.628 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:01.628 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.200 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.460 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.720 00:18:02.720 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.720 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.720 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.981 { 00:18:02.981 "cntlid": 103, 00:18:02.981 "qid": 0, 00:18:02.981 "state": "enabled", 00:18:02.981 "thread": "nvmf_tgt_poll_group_000", 00:18:02.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.981 "listen_address": { 00:18:02.981 "trtype": "TCP", 00:18:02.981 "adrfam": "IPv4", 00:18:02.981 "traddr": "10.0.0.2", 00:18:02.981 "trsvcid": "4420" 00:18:02.981 }, 00:18:02.981 "peer_address": { 00:18:02.981 "trtype": "TCP", 00:18:02.981 "adrfam": "IPv4", 00:18:02.981 "traddr": "10.0.0.1", 00:18:02.981 "trsvcid": "55874" 00:18:02.981 }, 00:18:02.981 "auth": { 00:18:02.981 "state": "completed", 00:18:02.981 "digest": "sha512", 00:18:02.981 "dhgroup": "null" 00:18:02.981 } 00:18:02.981 } 00:18:02.981 ]' 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.981 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.982 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.242 10:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:03.242 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.811 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
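The run above repeats one fixed pattern per digest/dhgroup/key combination: pin the SPDK host stack to a single DH-HMAC-CHAP digest and DH group, authorize the host NQN on the subsystem with the key pair under test, attach a controller through the host stack (which drives the in-band AUTH transaction), verify, and tear down. A condensed sketch of one iteration follows, assuming the named keys (key0/ckey0 and so on) were registered with the host keyring earlier in the run and that the target listens on 10.0.0.2:4420 as shown; rpc_cmd in the log talks to the target's default RPC socket, while the host stack is reached through -s /var/tmp/host.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side: allow exactly one digest and one DH group for the handshake.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: authorize the host NQN with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach through the SPDK host stack; this is where the AUTH handshake runs.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

Note the controller key is optional: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion echoed above drops the flag entirely when no ckey is defined for the key index, which is why key3 is added with --dhchap-key key3 alone.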
00:18:04.070 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.071 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.330 00:18:04.330 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.330 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.330 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.590 { 00:18:04.590 "cntlid": 105, 00:18:04.590 "qid": 0, 00:18:04.590 "state": "enabled", 00:18:04.590 "thread": "nvmf_tgt_poll_group_000", 00:18:04.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.590 "listen_address": { 00:18:04.590 "trtype": "TCP", 00:18:04.590 "adrfam": "IPv4", 00:18:04.590 "traddr": "10.0.0.2", 00:18:04.590 "trsvcid": "4420" 00:18:04.590 }, 00:18:04.590 "peer_address": { 00:18:04.590 "trtype": "TCP", 00:18:04.590 "adrfam": "IPv4", 00:18:04.590 "traddr": "10.0.0.1", 00:18:04.590 "trsvcid": "52278" 00:18:04.590 }, 00:18:04.590 "auth": { 00:18:04.590 "state": "completed", 00:18:04.590 "digest": "sha512", 00:18:04.590 "dhgroup": "ffdhe2048" 00:18:04.590 } 00:18:04.590 } 00:18:04.590 ]' 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.590 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.590 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.590 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.590 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.590 10:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.849 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:04.849 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:05.419 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.419 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.419 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.419 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.681 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.681 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.681 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.681 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.681 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.941 00:18:05.941 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.941 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.941 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.201 { 00:18:06.201 "cntlid": 107, 00:18:06.201 "qid": 0, 00:18:06.201 "state": "enabled", 00:18:06.201 "thread": "nvmf_tgt_poll_group_000", 00:18:06.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.201 "listen_address": { 00:18:06.201 "trtype": "TCP", 00:18:06.201 "adrfam": "IPv4", 00:18:06.201 "traddr": "10.0.0.2", 00:18:06.201 "trsvcid": "4420" 00:18:06.201 }, 00:18:06.201 "peer_address": { 00:18:06.201 "trtype": "TCP", 00:18:06.201 "adrfam": "IPv4", 00:18:06.201 "traddr": "10.0.0.1", 00:18:06.201 "trsvcid": "52320" 00:18:06.201 }, 00:18:06.201 "auth": { 00:18:06.201 "state": "completed", 00:18:06.201 "digest": "sha512", 00:18:06.201 "dhgroup": "ffdhe2048" 00:18:06.201 } 00:18:06.201 } 00:18:06.201 ]' 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.201 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:06.460 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.460 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.460 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.461 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:06.461 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:07.030 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.289 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.289 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.289 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.289 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
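Each attach is followed by the same assertions against both sides: the host must report the controller it just created, and the qpair JSON from nvmf_subsystem_get_qpairs must show the requested digest and DH group with the auth state "completed". A stand-alone version of those probes, with the jq paths taken from the qpair JSON printed above (the qpairs variable and herestring plumbing are added here for readability):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: the freshly attached controller shows up by name.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: one enabled qpair whose negotiated auth matches the request.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0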
00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.290 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.549 00:18:07.549 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.549 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.549 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.809 { 00:18:07.809 "cntlid": 109, 00:18:07.809 "qid": 0, 00:18:07.809 "state": "enabled", 00:18:07.809 "thread": "nvmf_tgt_poll_group_000", 00:18:07.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.809 "listen_address": { 00:18:07.809 "trtype": "TCP", 00:18:07.809 "adrfam": "IPv4", 00:18:07.809 "traddr": "10.0.0.2", 00:18:07.809 "trsvcid": "4420" 00:18:07.809 }, 00:18:07.809 "peer_address": { 00:18:07.809 "trtype": "TCP", 00:18:07.809 "adrfam": "IPv4", 00:18:07.809 "traddr": "10.0.0.1", 00:18:07.809 "trsvcid": "52354" 00:18:07.809 }, 00:18:07.809 "auth": { 00:18:07.809 "state": "completed", 00:18:07.809 "digest": "sha512", 00:18:07.809 "dhgroup": "ffdhe2048" 00:18:07.809 } 00:18:07.809 } 00:18:07.809 ]' 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.809 10:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.809 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.070 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.070 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.070 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.070 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:08.070 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.011 10:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.011 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.271 00:18:09.271 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.271 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.271 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.532 { 00:18:09.532 "cntlid": 111, 00:18:09.532 "qid": 0, 00:18:09.532 "state": "enabled", 00:18:09.532 "thread": "nvmf_tgt_poll_group_000", 00:18:09.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.532 "listen_address": { 00:18:09.532 "trtype": "TCP", 00:18:09.532 "adrfam": "IPv4", 00:18:09.532 "traddr": "10.0.0.2", 00:18:09.532 "trsvcid": "4420" 00:18:09.532 }, 00:18:09.532 "peer_address": { 00:18:09.532 "trtype": "TCP", 00:18:09.532 "adrfam": "IPv4", 00:18:09.532 "traddr": "10.0.0.1", 00:18:09.532 "trsvcid": "52382" 00:18:09.532 }, 00:18:09.532 "auth": { 00:18:09.532 "state": "completed", 00:18:09.532 "digest": "sha512", 00:18:09.532 "dhgroup": "ffdhe2048" 00:18:09.532 } 00:18:09.532 } 00:18:09.532 ]' 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.532 
10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.532 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.792 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:09.792 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.363 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.623 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.883 00:18:10.883 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.883 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.883 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.144 { 00:18:11.144 "cntlid": 113, 00:18:11.144 "qid": 0, 00:18:11.144 "state": "enabled", 00:18:11.144 "thread": "nvmf_tgt_poll_group_000", 00:18:11.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.144 "listen_address": { 00:18:11.144 "trtype": "TCP", 00:18:11.144 "adrfam": "IPv4", 00:18:11.144 "traddr": "10.0.0.2", 00:18:11.144 "trsvcid": "4420" 00:18:11.144 }, 00:18:11.144 "peer_address": { 00:18:11.144 "trtype": "TCP", 00:18:11.144 "adrfam": "IPv4", 00:18:11.144 "traddr": "10.0.0.1", 00:18:11.144 "trsvcid": "52416" 00:18:11.144 }, 00:18:11.144 "auth": { 00:18:11.144 "state": "completed", 00:18:11.144 "digest": "sha512", 00:18:11.144 "dhgroup": "ffdhe3072" 00:18:11.144 } 00:18:11.144 } 00:18:11.144 ]' 00:18:11.144 10:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.144 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.405 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:11.405 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.975 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.235 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.236 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.494 00:18:12.494 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.494 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.495 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.754 { 00:18:12.754 "cntlid": 115, 00:18:12.754 "qid": 0, 00:18:12.754 "state": "enabled", 00:18:12.754 "thread": "nvmf_tgt_poll_group_000", 00:18:12.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.754 "listen_address": { 00:18:12.754 "trtype": "TCP", 00:18:12.754 "adrfam": "IPv4", 00:18:12.754 "traddr": "10.0.0.2", 00:18:12.754 "trsvcid": "4420" 00:18:12.754 }, 00:18:12.754 "peer_address": { 00:18:12.754 "trtype": "TCP", 00:18:12.754 "adrfam": "IPv4", 
00:18:12.754 "traddr": "10.0.0.1", 00:18:12.754 "trsvcid": "52448" 00:18:12.754 }, 00:18:12.754 "auth": { 00:18:12.754 "state": "completed", 00:18:12.754 "digest": "sha512", 00:18:12.754 "dhgroup": "ffdhe3072" 00:18:12.754 } 00:18:12.754 } 00:18:12.754 ]' 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.754 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.015 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:13.015 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.585 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.846 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:13.846 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.846 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.846 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.846 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.847 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.108 00:18:14.108 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.108 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.108 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.368 { 00:18:14.368 "cntlid": 117, 00:18:14.368 "qid": 0, 00:18:14.368 "state": "enabled", 00:18:14.368 "thread": "nvmf_tgt_poll_group_000", 00:18:14.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.368 "listen_address": { 00:18:14.368 "trtype": "TCP", 
00:18:14.368 "adrfam": "IPv4", 00:18:14.368 "traddr": "10.0.0.2", 00:18:14.368 "trsvcid": "4420" 00:18:14.368 }, 00:18:14.368 "peer_address": { 00:18:14.368 "trtype": "TCP", 00:18:14.368 "adrfam": "IPv4", 00:18:14.368 "traddr": "10.0.0.1", 00:18:14.368 "trsvcid": "59146" 00:18:14.368 }, 00:18:14.368 "auth": { 00:18:14.368 "state": "completed", 00:18:14.368 "digest": "sha512", 00:18:14.368 "dhgroup": "ffdhe3072" 00:18:14.368 } 00:18:14.368 } 00:18:14.368 ]' 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.368 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.629 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:14.629 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.200 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.460 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:15.460 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.460 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.460 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.460 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.461 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.721 00:18:15.721 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.721 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.721 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.982 { 00:18:15.982 "cntlid": 119, 00:18:15.982 "qid": 0, 00:18:15.982 "state": "enabled", 00:18:15.982 "thread": "nvmf_tgt_poll_group_000", 00:18:15.982 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.982 "listen_address": { 00:18:15.982 "trtype": "TCP", 00:18:15.982 "adrfam": "IPv4", 00:18:15.982 "traddr": "10.0.0.2", 00:18:15.982 "trsvcid": "4420" 00:18:15.982 }, 00:18:15.982 "peer_address": { 00:18:15.982 "trtype": "TCP", 00:18:15.982 "adrfam": "IPv4", 00:18:15.982 "traddr": "10.0.0.1", 00:18:15.982 "trsvcid": "59178" 00:18:15.982 }, 00:18:15.982 "auth": { 00:18:15.982 "state": "completed", 00:18:15.982 "digest": "sha512", 00:18:15.982 "dhgroup": "ffdhe3072" 00:18:15.982 } 00:18:15.982 } 00:18:15.982 ]' 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.982 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.243 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:16.243 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.813 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.813 10:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.074 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.335 00:18:17.335 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.335 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.335 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.596 10:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.596 { 00:18:17.596 "cntlid": 121, 00:18:17.596 "qid": 0, 00:18:17.596 "state": "enabled", 00:18:17.596 "thread": "nvmf_tgt_poll_group_000", 00:18:17.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.596 "listen_address": { 00:18:17.596 "trtype": "TCP", 00:18:17.596 "adrfam": "IPv4", 00:18:17.596 "traddr": "10.0.0.2", 00:18:17.596 "trsvcid": "4420" 00:18:17.596 }, 00:18:17.596 "peer_address": { 00:18:17.596 "trtype": "TCP", 00:18:17.596 "adrfam": "IPv4", 00:18:17.596 "traddr": "10.0.0.1", 00:18:17.596 "trsvcid": "59194" 00:18:17.596 }, 00:18:17.596 "auth": { 00:18:17.596 "state": "completed", 00:18:17.596 "digest": "sha512", 00:18:17.596 "dhgroup": "ffdhe4096" 00:18:17.596 } 00:18:17.596 } 00:18:17.596 ]' 00:18:17.596 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.596 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.857 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:17.857 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:18.429 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
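Every iteration that follows repeats the same round trip with the next key id and DH group: register the host NQN on the target with the key under test, attach a host-side controller with the matching secret, assert that the resulting qpair really negotiated the expected digest, dhgroup and state, then tear everything down again. Condensed into a sketch (addresses, NQNs and key names exactly as this run uses them; key0..key3 are keyring entries the suite loads beforehand; rpc.py without -s talks to the target app, with -s /var/tmp/host.sock to the host app):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    for keyid in 0 1 2 3; do
        # Target side: permit this host, bound to the DH-HMAC-CHAP key under test.
        ./scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        # Host side: attach over TCP; authentication runs during the CONNECT exchange.
        ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
        # Verify the qpair authenticated with the expected parameters ("completed").
        ./scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
        # Tear down before the next key/dhgroup combination.
        ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        ./scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done

One wrinkle the trace makes visible: key0, key1 and key2 have companion controller keys (ckey0..ckey2) and are registered with an extra --dhchap-ctrlr-key "ckey$keyid", which upgrades the exchange to bidirectional authentication, whereas key3 has no companion key, so on those passes only the host is authenticated.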
00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.689 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.689 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.690 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.690 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.950 00:18:18.950 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.950 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.950 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.210 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.210 { 00:18:19.210 "cntlid": 123, 00:18:19.210 "qid": 0, 00:18:19.210 "state": "enabled", 00:18:19.210 "thread": "nvmf_tgt_poll_group_000", 00:18:19.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.210 "listen_address": { 00:18:19.210 "trtype": "TCP", 00:18:19.210 "adrfam": "IPv4", 00:18:19.211 "traddr": "10.0.0.2", 00:18:19.211 "trsvcid": "4420" 00:18:19.211 }, 00:18:19.211 "peer_address": { 00:18:19.211 "trtype": "TCP", 00:18:19.211 "adrfam": "IPv4", 00:18:19.211 "traddr": "10.0.0.1", 00:18:19.211 "trsvcid": "59222" 00:18:19.211 }, 00:18:19.211 "auth": { 00:18:19.211 "state": "completed", 00:18:19.211 "digest": "sha512", 00:18:19.211 "dhgroup": "ffdhe4096" 00:18:19.211 } 00:18:19.211 } 00:18:19.211 ]' 00:18:19.211 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.211 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.211 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.211 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.211 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.471 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.471 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.471 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.471 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:19.471 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.412 10:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.412 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.672 00:18:20.672 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.672 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.672 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.932 10:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:20.932 {
00:18:20.932 "cntlid": 125,
00:18:20.932 "qid": 0,
00:18:20.932 "state": "enabled",
00:18:20.932 "thread": "nvmf_tgt_poll_group_000",
00:18:20.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:20.932 "listen_address": {
00:18:20.932 "trtype": "TCP",
00:18:20.932 "adrfam": "IPv4",
00:18:20.932 "traddr": "10.0.0.2",
00:18:20.932 "trsvcid": "4420"
00:18:20.932 },
00:18:20.932 "peer_address": {
00:18:20.932 "trtype": "TCP",
00:18:20.932 "adrfam": "IPv4",
00:18:20.932 "traddr": "10.0.0.1",
00:18:20.932 "trsvcid": "59252"
00:18:20.932 },
00:18:20.932 "auth": {
00:18:20.932 "state": "completed",
00:18:20.932 "digest": "sha512",
00:18:20.932 "dhgroup": "ffdhe4096"
00:18:20.932 }
00:18:20.932 }
00:18:20.932 ]'
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:20.932 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS:
00:18:21.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS:
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:21.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
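The nvme-cli leg of each cycle passes the same key material in its on-the-wire form. A DH-HMAC-CHAP secret is the string DHHC-1:<t>:<base64 data>:, where the middle field records how the secret was derived (00 means it is used as-is, while 01, 02 and 03 mean it was transformed with SHA-256, SHA-384 or SHA-512 respectively) and the base64 payload carries the secret bytes plus a short integrity check; the four keys this job rotates through use all four variants. Recent nvme-cli builds can mint such secrets; a sketch, with the caveat that the exact flag spellings are an assumption to check against your nvme-cli version:

    # Generate a fresh 32-byte DH-HMAC-CHAP secret in DHHC-1 form, bound to the
    # host NQN used by this job (assumed flags; see nvme gen-dhchap-key --help).
    nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be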
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:21.763 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:22.023 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:22.284
00:18:22.284 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:22.284 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:22.284 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:22.545 10:58:41
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.545 { 00:18:22.545 "cntlid": 127, 00:18:22.545 "qid": 0, 00:18:22.545 "state": "enabled", 00:18:22.545 "thread": "nvmf_tgt_poll_group_000", 00:18:22.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.545 "listen_address": { 00:18:22.545 "trtype": "TCP", 00:18:22.545 "adrfam": "IPv4", 00:18:22.545 "traddr": "10.0.0.2", 00:18:22.545 "trsvcid": "4420" 00:18:22.545 }, 00:18:22.545 "peer_address": { 00:18:22.545 "trtype": "TCP", 00:18:22.545 "adrfam": "IPv4", 00:18:22.545 "traddr": "10.0.0.1", 00:18:22.545 "trsvcid": "59270" 00:18:22.545 }, 00:18:22.545 "auth": { 00:18:22.545 "state": "completed", 00:18:22.545 "digest": "sha512", 00:18:22.545 "dhgroup": "ffdhe4096" 00:18:22.545 } 00:18:22.545 } 00:18:22.545 ]' 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.545 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.545 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.545 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.805 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.805 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.805 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.805 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:22.806 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:23.376 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.635 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.635 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.205 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.205 
10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.205 { 00:18:24.205 "cntlid": 129, 00:18:24.205 "qid": 0, 00:18:24.205 "state": "enabled", 00:18:24.205 "thread": "nvmf_tgt_poll_group_000", 00:18:24.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.205 "listen_address": { 00:18:24.205 "trtype": "TCP", 00:18:24.205 "adrfam": "IPv4", 00:18:24.205 "traddr": "10.0.0.2", 00:18:24.205 "trsvcid": "4420" 00:18:24.205 }, 00:18:24.205 "peer_address": { 00:18:24.205 "trtype": "TCP", 00:18:24.205 "adrfam": "IPv4", 00:18:24.205 "traddr": "10.0.0.1", 00:18:24.205 "trsvcid": "39464" 00:18:24.205 }, 00:18:24.205 "auth": { 00:18:24.205 "state": "completed", 00:18:24.205 "digest": "sha512", 00:18:24.205 "dhgroup": "ffdhe6144" 00:18:24.205 } 00:18:24.205 } 00:18:24.205 ]' 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.205 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.469 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.469 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.469 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.469 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:24.469 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret 
DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.407 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.667 00:18:25.667 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.667 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.667 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.927 { 00:18:25.927 "cntlid": 131, 00:18:25.927 "qid": 0, 00:18:25.927 "state": "enabled", 00:18:25.927 "thread": "nvmf_tgt_poll_group_000", 00:18:25.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.927 "listen_address": { 00:18:25.927 "trtype": "TCP", 00:18:25.927 "adrfam": "IPv4", 00:18:25.927 "traddr": "10.0.0.2", 00:18:25.927 "trsvcid": "4420" 00:18:25.927 }, 00:18:25.927 "peer_address": { 00:18:25.927 "trtype": "TCP", 00:18:25.927 "adrfam": "IPv4", 00:18:25.927 "traddr": "10.0.0.1", 00:18:25.927 "trsvcid": "39494" 00:18:25.927 }, 00:18:25.927 "auth": { 00:18:25.927 "state": "completed", 00:18:25.927 "digest": "sha512", 00:18:25.927 "dhgroup": "ffdhe6144" 00:18:25.927 } 00:18:25.927 } 00:18:25.927 ]' 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.927 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:26.187 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.126 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.127 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.386 00:18:27.386 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.386 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.386 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.647 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.647 { 00:18:27.647 "cntlid": 133, 00:18:27.647 "qid": 0, 00:18:27.647 "state": "enabled", 00:18:27.647 "thread": "nvmf_tgt_poll_group_000", 00:18:27.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.648 "listen_address": { 00:18:27.648 "trtype": "TCP", 00:18:27.648 "adrfam": "IPv4", 00:18:27.648 "traddr": "10.0.0.2", 00:18:27.648 "trsvcid": "4420" 00:18:27.648 }, 00:18:27.648 "peer_address": { 00:18:27.648 "trtype": "TCP", 00:18:27.648 "adrfam": "IPv4", 00:18:27.648 "traddr": "10.0.0.1", 00:18:27.648 "trsvcid": "39516" 00:18:27.648 }, 00:18:27.648 "auth": { 00:18:27.648 "state": "completed", 00:18:27.648 "digest": "sha512", 00:18:27.648 "dhgroup": "ffdhe6144" 00:18:27.648 } 00:18:27.648 } 00:18:27.648 ]' 00:18:27.648 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.648 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.648 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret 
DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:27.909 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:28.850 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.111 00:18:29.111 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.111 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.111 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.374 { 00:18:29.374 "cntlid": 135, 00:18:29.374 "qid": 0, 00:18:29.374 "state": "enabled", 00:18:29.374 "thread": "nvmf_tgt_poll_group_000", 00:18:29.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.374 "listen_address": { 00:18:29.374 "trtype": "TCP", 00:18:29.374 "adrfam": "IPv4", 00:18:29.374 "traddr": "10.0.0.2", 00:18:29.374 "trsvcid": "4420" 00:18:29.374 }, 00:18:29.374 "peer_address": { 00:18:29.374 "trtype": "TCP", 00:18:29.374 "adrfam": "IPv4", 00:18:29.374 "traddr": "10.0.0.1", 00:18:29.374 "trsvcid": "39534" 00:18:29.374 }, 00:18:29.374 "auth": { 00:18:29.374 "state": "completed", 00:18:29.374 "digest": "sha512", 00:18:29.374 "dhgroup": "ffdhe6144" 00:18:29.374 } 00:18:29.374 } 00:18:29.374 ]' 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.374 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.635 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:29.635 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.205 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.466 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.467 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.467 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.039 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.039 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.299 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.299 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.299 { 00:18:31.299 "cntlid": 137, 00:18:31.299 "qid": 0, 00:18:31.299 "state": "enabled", 00:18:31.299 "thread": "nvmf_tgt_poll_group_000", 00:18:31.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.299 "listen_address": { 00:18:31.299 "trtype": "TCP", 00:18:31.299 "adrfam": "IPv4", 00:18:31.299 "traddr": "10.0.0.2", 00:18:31.299 "trsvcid": "4420" 00:18:31.299 }, 00:18:31.299 "peer_address": { 00:18:31.299 "trtype": "TCP", 00:18:31.300 "adrfam": "IPv4", 00:18:31.300 "traddr": "10.0.0.1", 00:18:31.300 "trsvcid": "39550" 00:18:31.300 }, 00:18:31.300 "auth": { 00:18:31.300 "state": "completed", 00:18:31.300 "digest": "sha512", 00:18:31.300 "dhgroup": "ffdhe8192" 00:18:31.300 } 00:18:31.300 } 00:18:31.300 ]' 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.300 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.559 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:31.559 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.129 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.390 10:58:51 
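Each connect_authenticate pass traced above exercises one (digest, dhgroup, keyid) combination end to end. Stripped of the xtrace noise, the cycle reduces to the sketch below; the rpc.py path, sockets, address, and NQNs are the ones this run uses, while the key names (key3 alone, key0/ckey0, key1/ckey1, and so on) are whatever the loop iteration selects, so treat the concrete values here as stand-ins:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: restrict the initiator to the digest/dhgroup pair under test.
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host NQN, bound to the key under test (plus the
    # controller key when this iteration verifies bidirectional auth).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach; creating the admin qpair forces a DH-HMAC-CHAP
    # handshake with exactly the parameters configured above.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

The nvmf_subsystem_get_qpairs inspection that follows each attach is what turns this from a smoke test into an assertion on the negotiated digest, dhgroup, and auth state.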
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.390 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.962 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.962 { 00:18:32.962 "cntlid": 139, 00:18:32.962 "qid": 0, 00:18:32.962 "state": "enabled", 00:18:32.962 "thread": "nvmf_tgt_poll_group_000", 00:18:32.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.962 "listen_address": { 00:18:32.962 "trtype": "TCP", 00:18:32.962 "adrfam": "IPv4", 00:18:32.962 "traddr": "10.0.0.2", 00:18:32.962 "trsvcid": "4420" 00:18:32.962 }, 00:18:32.962 "peer_address": { 00:18:32.962 "trtype": "TCP", 00:18:32.962 "adrfam": "IPv4", 00:18:32.962 "traddr": "10.0.0.1", 00:18:32.962 "trsvcid": "39574" 00:18:32.962 }, 00:18:32.962 "auth": { 00:18:32.962 "state": "completed", 00:18:32.962 "digest": "sha512", 00:18:32.962 "dhgroup": "ffdhe8192" 00:18:32.962 } 00:18:32.962 } 00:18:32.962 ]' 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.962 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.223 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.223 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.223 10:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.223 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.223 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.483 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:33.483 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: --dhchap-ctrl-secret DHHC-1:02:NjE2NzQ2MDkwMTU1NDAzZTVmODNhZWY4MTkzNTgxMzE3Njk5M2VlMWNjOWVmZTZlKYojsw==: 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.055 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.316 10:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.316 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.576 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.836 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.836 { 00:18:34.836 "cntlid": 141, 00:18:34.836 "qid": 0, 00:18:34.836 "state": "enabled", 00:18:34.836 "thread": "nvmf_tgt_poll_group_000", 00:18:34.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.836 "listen_address": { 00:18:34.836 "trtype": "TCP", 00:18:34.836 "adrfam": "IPv4", 00:18:34.836 "traddr": "10.0.0.2", 00:18:34.836 "trsvcid": "4420" 00:18:34.836 }, 00:18:34.836 "peer_address": { 00:18:34.836 "trtype": "TCP", 00:18:34.837 "adrfam": "IPv4", 00:18:34.837 "traddr": "10.0.0.1", 00:18:34.837 "trsvcid": "52834" 00:18:34.837 }, 00:18:34.837 "auth": { 00:18:34.837 "state": "completed", 00:18:34.837 "digest": "sha512", 00:18:34.837 "dhgroup": "ffdhe8192" 00:18:34.837 } 00:18:34.837 } 00:18:34.837 ]' 00:18:34.837 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.837 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.837 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.098 10:58:54 
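The escaped comparisons throughout this trace ([[ sha512 == \s\h\a\5\1\2 ]], [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]) are not log corruption. When the right-hand side of a bash [[ ... == ... ]] is quoted in the source, xtrace re-prints it with every character backslash-escaped to show that it is matched literally rather than as a glob pattern. A two-line reproduction:

    set -x
    dhgroup=ffdhe8192
    [[ $dhgroup == "ffdhe8192" ]]   # xtrace prints: [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]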
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:35.098 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:01:MjUxZDJiYmM4OWE3YTMwZDM2YzY1NmY1NzY0ZjY3MDgm/jbS: 00:18:36.036 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.036 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.036 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.036 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.036 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.037 10:58:55 
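Besides the SPDK initiator, every group is also verified with the kernel host: nvme connect is handed the transport secret inline, plus a controller secret when the pass checks bidirectional authentication, and nvme disconnect tears the association down again. A minimal sketch with placeholder secrets (this run uses its own pre-generated DHHC-1 blobs; nvme gen-dhchap-key can produce compatible ones):

    # Placeholder secrets; substitute real DHHC-1 strings.
    host_key='DHHC-1:02:<base64-secret>:'
    ctrl_key='DHHC-1:01:<base64-secret>:'

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0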
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.037 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.608 00:18:36.608 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.609 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.609 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.869 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.870 { 00:18:36.870 "cntlid": 143, 00:18:36.870 "qid": 0, 00:18:36.870 "state": "enabled", 00:18:36.870 "thread": "nvmf_tgt_poll_group_000", 00:18:36.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.870 "listen_address": { 00:18:36.870 "trtype": "TCP", 00:18:36.870 "adrfam": "IPv4", 00:18:36.870 "traddr": "10.0.0.2", 00:18:36.870 "trsvcid": "4420" 00:18:36.870 }, 00:18:36.870 "peer_address": { 00:18:36.870 "trtype": "TCP", 00:18:36.870 "adrfam": "IPv4", 00:18:36.870 "traddr": "10.0.0.1", 00:18:36.870 "trsvcid": "52860" 00:18:36.870 }, 00:18:36.870 "auth": { 00:18:36.870 "state": "completed", 00:18:36.870 "digest": "sha512", 00:18:36.870 "dhgroup": "ffdhe8192" 00:18:36.870 } 00:18:36.870 } 00:18:36.870 ]' 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.870 
10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.870 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.130 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:37.130 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=: 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.702 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.982 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:37.982 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.982 10:58:57 
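The IFS=, and printf %s pair at target/auth.sh@129-130 above is how the script flattens its digest and dhgroup arrays into the comma-separated strings the RPC flags expect: "${array[*]}" joins elements with the first character of IFS. The idiom in isolation, using the same values as this full-matrix pass:

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    # Join inside a command substitution so the IFS change cannot leak out.
    joined_digests=$(IFS=,; printf '%s' "${digests[*]}")
    joined_dhgroups=$(IFS=,; printf '%s' "${dhgroups[*]}")

    echo "$joined_digests"    # sha256,sha384,sha512
    echo "$joined_dhgroups"   # null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

With everything enabled at once, the @141 connect_authenticate call then checks which combination the negotiation actually lands on.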
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.982 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.982 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.983 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.588 00:18:38.588 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.588 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.588 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.588 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.588 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.588 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.588 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.588 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.589 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.589 { 00:18:38.589 "cntlid": 145, 00:18:38.589 "qid": 0, 00:18:38.589 "state": "enabled", 00:18:38.589 "thread": "nvmf_tgt_poll_group_000", 00:18:38.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.589 "listen_address": { 00:18:38.589 "trtype": "TCP", 00:18:38.589 "adrfam": "IPv4", 00:18:38.589 "traddr": "10.0.0.2", 00:18:38.589 "trsvcid": "4420" 00:18:38.589 }, 00:18:38.589 "peer_address": { 00:18:38.589 
"trtype": "TCP", 00:18:38.589 "adrfam": "IPv4", 00:18:38.589 "traddr": "10.0.0.1", 00:18:38.589 "trsvcid": "52874" 00:18:38.589 }, 00:18:38.589 "auth": { 00:18:38.589 "state": "completed", 00:18:38.589 "digest": "sha512", 00:18:38.589 "dhgroup": "ffdhe8192" 00:18:38.589 } 00:18:38.589 } 00:18:38.589 ]' 00:18:38.589 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.589 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.589 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.880 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.880 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.880 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:38.880 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZGVlNzZhMzRlNGE3OGNhMjRhMWY1ODAyYTU1ZGJjZGVkMWU5MDIzNGVhZjBhYjYw7xo8Cw==: --dhchap-ctrl-secret DHHC-1:03:MmUyYTMyMGI1N2I0OTNiYmMxMzBmM2FmODQwMzdhZDhiODczZjBkMGM5N2U2NzQ4YzM2NDQ0ZDUzOTQ0NzIyNumfLIw=: 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:18:39.818 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:18:40.078 request:
00:18:40.078 {
00:18:40.078 "name": "nvme0",
00:18:40.078 "trtype": "tcp",
00:18:40.078 "traddr": "10.0.0.2",
00:18:40.078 "adrfam": "ipv4",
00:18:40.078 "trsvcid": "4420",
00:18:40.078 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:40.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:40.078 "prchk_reftag": false,
00:18:40.078 "prchk_guard": false,
00:18:40.078 "hdgst": false,
00:18:40.078 "ddgst": false,
00:18:40.078 "dhchap_key": "key2",
00:18:40.078 "allow_unrecognized_csi": false,
00:18:40.078 "method": "bdev_nvme_attach_controller",
00:18:40.078 "req_id": 1
00:18:40.078 }
00:18:40.078 Got JSON-RPC error response
00:18:40.078 response:
00:18:40.078 {
00:18:40.078 "code": -5,
00:18:40.078 "message": "Input/output error"
00:18:40.078 }
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.078 10:58:59
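This is the deliberate failure path: the target knows only key1 for this host, so attaching with key2 must not succeed, and the failed DH-HMAC-CHAP handshake surfaces as JSON-RPC error -5 (Input/output error). The NOT wrapper from autotest_common.sh inverts the exit status so the expected failure counts as a pass; a hedged reduction of that helper:

    # Succeeds (exit 0) only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # death by signal is a real error
        (( es != 0 ))                    # invert: command failure => success
    }

    NOT false && echo "expected failure observed"   # prints the message
    NOT true && echo "unexpected"                   # stays silent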
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.078 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.648 request: 00:18:40.648 { 00:18:40.648 "name": "nvme0", 00:18:40.648 "trtype": "tcp", 00:18:40.648 "traddr": "10.0.0.2", 00:18:40.648 "adrfam": "ipv4", 00:18:40.648 "trsvcid": "4420", 00:18:40.648 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.648 "prchk_reftag": false, 00:18:40.648 "prchk_guard": false, 00:18:40.648 "hdgst": false, 00:18:40.648 "ddgst": false, 00:18:40.648 "dhchap_key": "key1", 00:18:40.648 "dhchap_ctrlr_key": "ckey2", 00:18:40.648 "allow_unrecognized_csi": false, 00:18:40.648 "method": "bdev_nvme_attach_controller", 00:18:40.648 "req_id": 1 00:18:40.648 } 00:18:40.648 Got JSON-RPC error response 00:18:40.648 response: 00:18:40.648 { 00:18:40.648 "code": -5, 00:18:40.648 "message": "Input/output error" 00:18:40.648 } 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.648 10:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.648 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.907 request: 00:18:40.907 { 00:18:40.907 "name": "nvme0", 00:18:40.907 "trtype": "tcp", 00:18:40.907 "traddr": "10.0.0.2", 00:18:40.907 "adrfam": "ipv4", 00:18:40.907 "trsvcid": "4420", 00:18:40.907 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.907 "prchk_reftag": false, 00:18:40.907 "prchk_guard": false, 00:18:40.907 "hdgst": false, 00:18:40.907 "ddgst": false, 00:18:40.907 "dhchap_key": "key1", 00:18:40.907 "dhchap_ctrlr_key": "ckey1", 00:18:40.907 "allow_unrecognized_csi": false, 00:18:40.907 "method": "bdev_nvme_attach_controller", 00:18:40.907 "req_id": 1 00:18:40.907 } 00:18:40.907 Got JSON-RPC error response 00:18:40.907 response: 00:18:40.907 { 00:18:40.907 "code": -5, 00:18:40.907 "message": "Input/output error" 00:18:40.907 } 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 362242 ']' 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 362242' 00:18:41.168 killing process with pid 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 362242 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=388573 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 388573 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 388573 ']' 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.168 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 388573 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 388573 ']' 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
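For the second half of the test the target is relaunched with startup gated on RPC (--wait-for-rpc) and DH-CHAP debug logging enabled (-L nvmf_auth), inside the cvl_0_0_ns_spdk network namespace this job uses; the harness then polls the RPC socket until the process answers. A condensed sketch of that start-and-wait, with the poll loop simplified from the harness's waitforlisten:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    sudo ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Simplified waitforlisten: loop until the RPC socket accepts a request.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target already died
        sleep 0.5
    done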
00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:42.107 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 null0
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6Q9
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xD6 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xD6
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Yc3
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.R1f ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R1f
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OPM
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.jGb ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jGb
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bhb
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.367 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:42.627 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:43.197 nvme0n1
00:18:43.197 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:43.197 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:43.197 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:43.457 {
00:18:43.457 "cntlid": 1,
00:18:43.457 "qid": 0,
00:18:43.457 "state": "enabled",
00:18:43.457 "thread": "nvmf_tgt_poll_group_000",
00:18:43.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:43.457 "listen_address": {
00:18:43.457 "trtype": "TCP",
00:18:43.457 "adrfam": "IPv4",
00:18:43.457 "traddr": "10.0.0.2",
00:18:43.457 "trsvcid": "4420"
00:18:43.457 },
00:18:43.457 "peer_address": {
00:18:43.457 "trtype": "TCP",
00:18:43.457 "adrfam": "IPv4",
00:18:43.457 "traddr": "10.0.0.1",
00:18:43.457 "trsvcid": "52938"
00:18:43.457 },
00:18:43.457 "auth": {
00:18:43.457 "state": "completed",
00:18:43.457 "digest": "sha512",
00:18:43.457 "dhgroup": "ffdhe8192"
00:18:43.457 }
00:18:43.457 }
00:18:43.457 ]'
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:43.457 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:43.717 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=:
00:18:43.717 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=:
00:18:44.286 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:44.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:18:44.546 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:44.546 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:44.807 request:
00:18:44.807 {
00:18:44.807 "name": "nvme0",
00:18:44.807 "trtype": "tcp",
00:18:44.807 "traddr": "10.0.0.2",
00:18:44.807 "adrfam": "ipv4",
00:18:44.807 "trsvcid": "4420",
00:18:44.807 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:44.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:44.807 "prchk_reftag": false,
00:18:44.807 "prchk_guard": false,
00:18:44.807 "hdgst": false,
00:18:44.807 "ddgst": false,
00:18:44.807 "dhchap_key": "key3",
00:18:44.807 "allow_unrecognized_csi": false,
00:18:44.807 "method": "bdev_nvme_attach_controller",
00:18:44.807 "req_id": 1
00:18:44.807 }
00:18:44.807 Got JSON-RPC error response
00:18:44.807 response:
00:18:44.807 {
00:18:44.807 "code": -5,
00:18:44.807 "message": "Input/output error"
00:18:44.807 }
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:44.807 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:45.067 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:45.067 request:
00:18:45.067 {
00:18:45.067 "name": "nvme0",
00:18:45.067 "trtype": "tcp",
00:18:45.067 "traddr": "10.0.0.2",
00:18:45.067 "adrfam": "ipv4",
00:18:45.067 "trsvcid": "4420",
00:18:45.067 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:45.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:45.068 "prchk_reftag": false,
00:18:45.068 "prchk_guard": false,
00:18:45.068 "hdgst": false,
00:18:45.068 "ddgst": false,
00:18:45.068 "dhchap_key": "key3",
00:18:45.068 "allow_unrecognized_csi": false,
00:18:45.068 "method": "bdev_nvme_attach_controller",
00:18:45.068 "req_id": 1
00:18:45.068 }
00:18:45.068 Got JSON-RPC error response
00:18:45.068 response:
00:18:45.068 {
00:18:45.068 "code": -5,
00:18:45.068 "message": "Input/output error"
00:18:45.068 }
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:45.331 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:45.332 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:45.332 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:45.332 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.332 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.332 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.902 request:
00:18:45.902 {
00:18:45.902 "name": "nvme0",
00:18:45.902 "trtype": "tcp",
00:18:45.902 "traddr": "10.0.0.2",
00:18:45.902 "adrfam": "ipv4",
00:18:45.902 "trsvcid": "4420",
00:18:45.902 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:45.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:45.902 "prchk_reftag": false,
00:18:45.902 "prchk_guard": false,
00:18:45.902 "hdgst": false,
00:18:45.902 "ddgst": false,
00:18:45.902 "dhchap_key": "key0",
00:18:45.902 "dhchap_ctrlr_key": "key1",
00:18:45.902 "allow_unrecognized_csi": false,
00:18:45.902 "method": "bdev_nvme_attach_controller",
00:18:45.902 "req_id": 1
00:18:45.902 }
00:18:45.902 Got JSON-RPC error response
00:18:45.902 response:
00:18:45.902 {
00:18:45.902 "code": -5,
00:18:45.902 "message": "Input/output error"
00:18:45.902 }
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:45.902 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:45.903 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:45.903 nvme0n1
00:18:45.903 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:18:45.903 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:18:45.903 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.164 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.164 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.164 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:46.424 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:46.996 nvme0n1
00:18:46.996 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:46.996 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:46.996 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:47.256 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.517 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.517 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=:
00:18:47.517 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: --dhchap-ctrl-secret DHHC-1:03:OGI1YzgyNTk1NDZiZjgzMjE3ZjVkNWRmMDE1MTdjMWNkYmQxYWVhNzVhZjc0Nzg1YTFlOWRhOTY2Y2YxOGYyNaNtwYg=:
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:48.087 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:48.348 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:48.609 request:
00:18:48.609 {
00:18:48.609 "name": "nvme0",
00:18:48.609 "trtype": "tcp",
00:18:48.609 "traddr": "10.0.0.2",
00:18:48.609 "adrfam": "ipv4",
00:18:48.609 "trsvcid": "4420",
00:18:48.609 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:48.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:48.609 "prchk_reftag": false,
00:18:48.609 "prchk_guard": false,
00:18:48.609 "hdgst": false,
00:18:48.609 "ddgst": false,
00:18:48.609 "dhchap_key": "key1",
00:18:48.609 "allow_unrecognized_csi": false,
00:18:48.609 "method": "bdev_nvme_attach_controller",
00:18:48.609 "req_id": 1
00:18:48.609 }
00:18:48.609 Got JSON-RPC error response
00:18:48.609 response:
00:18:48.609 {
00:18:48.609 "code": -5,
00:18:48.609 "message": "Input/output error"
00:18:48.609 }
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:48.870 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:49.441 nvme0n1
00:18:49.441 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:49.441 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:49.441 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.702 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.702 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.702 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:49.962 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:50.223 nvme0n1
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:50.223 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: '' 2s
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv:
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv: ]]
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2E2NTI0YjhlNzcxNzhjMzA0ZjU0NmU0MmE4YTY5Y2IDvHcv:
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:50.484 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:52.396 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: 2s
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==:
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==: ]]
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGY2NzFhZmU4YjFlMDNkYTg2NTA4MGM5NWFhOTU1YTkxZjYwMDg1YTkwODUyMjcwfDb6qg==:
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:52.655 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:54.572 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:54.572 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:55.513 nvme0n1
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:55.513 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:55.774 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:55.774 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:55.774 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:18:56.034 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.294 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:56.295 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.295 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:56.295 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.295 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.295 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.889 request:
00:18:56.889 {
00:18:56.889 "name": "nvme0",
00:18:56.889 "dhchap_key": "key1",
00:18:56.889 "dhchap_ctrlr_key": "key3",
00:18:56.889 "method": "bdev_nvme_set_keys",
00:18:56.889 "req_id": 1
00:18:56.889 }
00:18:56.889 Got JSON-RPC error response
00:18:56.889 response:
00:18:56.889 {
00:18:56.889 "code": -13,
00:18:56.889 "message": "Permission denied"
00:18:56.889 }
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:18:56.889 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:58.274 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:58.845 nvme0n1
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:58.845 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:59.415 request: 00:18:59.415 { 00:18:59.415 "name": "nvme0", 00:18:59.415 "dhchap_key": "key2", 00:18:59.415 "dhchap_ctrlr_key": "key0", 00:18:59.415 "method": "bdev_nvme_set_keys", 00:18:59.415 "req_id": 1 00:18:59.415 } 00:18:59.415 Got JSON-RPC error response 00:18:59.415 response: 00:18:59.415 { 00:18:59.415 "code": -13, 00:18:59.415 "message": "Permission denied" 00:18:59.415 } 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:59.415 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.675 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:59.675 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:00.617 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:00.617 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:00.617 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 362494 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 362494 ']' 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 362494 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:00.878 10:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 362494 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 362494' 00:19:00.878 killing process with pid 362494 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 362494 00:19:00.878 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 362494 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.139 rmmod nvme_tcp 00:19:01.139 rmmod nvme_fabrics 00:19:01.139 rmmod nvme_keyring 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.139 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 388573 ']' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 388573 ']' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 388573' 00:19:01.140 killing process with pid 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@976 -- # wait 388573 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.140 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6Q9 /tmp/spdk.key-sha256.Yc3 /tmp/spdk.key-sha384.OPM /tmp/spdk.key-sha512.bhb /tmp/spdk.key-sha512.xD6 /tmp/spdk.key-sha384.R1f /tmp/spdk.key-sha256.jGb '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:03.687 00:19:03.687 real 2m37.018s 00:19:03.687 user 5m53.363s 00:19:03.687 sys 0m24.748s 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.687 ************************************ 00:19:03.687 END TEST nvmf_auth_target 00:19:03.687 ************************************ 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.687 ************************************ 00:19:03.687 START TEST nvmf_bdevio_no_huge 00:19:03.687 ************************************ 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:03.687 * Looking for test storage... 
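
The teardown traced above (nvmftestfini plus the auth-test cleanup) unloads the NVMe-oF kernel modules, kills the target app, strips only the SPDK-tagged iptables rules, removes the test network namespace, and scrubs the DH-HMAC-CHAP key files. A minimal sketch of that sequence follows; the variable and interface names are assumptions taken from the trace, not the verbatim nvmf/common.sh helpers:

    # Sketch of the cleanup traced above; $nvmfpid, the cvl_0_* names and the
    # key-file paths are assumed from the trace.
    sync
    modprobe -v -r nvme-tcp || true       # also pulls out nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics || true
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"        # stop nvmf_tgt if still alive
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # remove the target namespace
    ip -4 addr flush cvl_0_1                                 # clear the initiator-side address
    rm -f /tmp/spdk.key-*                                    # scrub DH-HMAC-CHAP key files
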
00:19:03.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.687 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.687 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:03.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.688 --rc genhtml_branch_coverage=1 00:19:03.688 --rc genhtml_function_coverage=1 00:19:03.688 --rc genhtml_legend=1 00:19:03.688 --rc geninfo_all_blocks=1 00:19:03.688 --rc geninfo_unexecuted_blocks=1 00:19:03.688 00:19:03.688 ' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:03.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.688 --rc genhtml_branch_coverage=1 00:19:03.688 --rc genhtml_function_coverage=1 00:19:03.688 --rc genhtml_legend=1 00:19:03.688 --rc geninfo_all_blocks=1 00:19:03.688 --rc geninfo_unexecuted_blocks=1 00:19:03.688 00:19:03.688 ' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:03.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.688 --rc genhtml_branch_coverage=1 00:19:03.688 --rc genhtml_function_coverage=1 00:19:03.688 --rc genhtml_legend=1 00:19:03.688 --rc geninfo_all_blocks=1 00:19:03.688 --rc geninfo_unexecuted_blocks=1 00:19:03.688 00:19:03.688 ' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:03.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.688 --rc genhtml_branch_coverage=1 00:19:03.688 --rc genhtml_function_coverage=1 00:19:03.688 --rc genhtml_legend=1 00:19:03.688 --rc geninfo_all_blocks=1 00:19:03.688 --rc geninfo_unexecuted_blocks=1 00:19:03.688 00:19:03.688 ' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:03.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.688 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.833 
10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:11.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:11.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:11.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:11.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:19:11.833 00:19:11.833 --- 10.0.0.2 ping statistics --- 00:19:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.833 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:19:11.833 00:19:11.833 --- 10.0.0.1 ping statistics --- 00:19:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.833 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=396769 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 396769 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 396769 ']' 00:19:11.833 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.834 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.834 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.834 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.834 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.834 [2024-11-15 10:59:30.620265] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:19:11.834 [2024-11-15 10:59:30.620337] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:11.834 [2024-11-15 10:59:30.727054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.834 [2024-11-15 10:59:30.787579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.834 [2024-11-15 10:59:30.787626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.834 [2024-11-15 10:59:30.787635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.834 [2024-11-15 10:59:30.787642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.834 [2024-11-15 10:59:30.787649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
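
Above, nvmfappstart launches the target inside the test namespace with hugepages disabled (--no-huge -s 1024) and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that launch-and-wait pattern, with the flags copied from the trace; the polling loop is an illustration, not the exact waitforlisten helper:

    # Launch nvmf_tgt without hugepages and wait for its RPC socket (sketch).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
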
00:19:11.834 [2024-11-15 10:59:30.789096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:11.834 [2024-11-15 10:59:30.789229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:11.834 [2024-11-15 10:59:30.789393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.834 [2024-11-15 10:59:30.789393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:12.094 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.094 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 [2024-11-15 10:59:31.497323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 Malloc0 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 [2024-11-15 10:59:31.551334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:12.095 { 00:19:12.095 "params": { 00:19:12.095 "name": "Nvme$subsystem", 00:19:12.095 "trtype": "$TEST_TRANSPORT", 00:19:12.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.095 "adrfam": "ipv4", 00:19:12.095 "trsvcid": "$NVMF_PORT", 00:19:12.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.095 "hdgst": ${hdgst:-false}, 00:19:12.095 "ddgst": ${ddgst:-false} 00:19:12.095 }, 00:19:12.095 "method": "bdev_nvme_attach_controller" 00:19:12.095 } 00:19:12.095 EOF 00:19:12.095 )") 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:12.095 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:12.095 "params": { 00:19:12.095 "name": "Nvme1", 00:19:12.095 "trtype": "tcp", 00:19:12.095 "traddr": "10.0.0.2", 00:19:12.095 "adrfam": "ipv4", 00:19:12.095 "trsvcid": "4420", 00:19:12.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.095 "hdgst": false, 00:19:12.095 "ddgst": false 00:19:12.095 }, 00:19:12.095 "method": "bdev_nvme_attach_controller" 00:19:12.095 }' 00:19:12.095 [2024-11-15 10:59:31.610442] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
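
The rpc_cmd calls traced above stand up the target in four steps: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev as a namespace of subsystem cnode1, and open a listener on 10.0.0.2:4420. Replayed by hand against the same socket, the sequence would look like this (flags copied verbatim from the trace):

    # By-hand replay of the target-setup RPCs traced above.
    rpc=./scripts/rpc.py; sock=/var/tmp/spdk.sock
    $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192
    $rpc -s $sock bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
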
00:19:12.095 [2024-11-15 10:59:31.610520] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid397060 ] 00:19:12.356 [2024-11-15 10:59:31.708319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:12.357 [2024-11-15 10:59:31.768624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.357 [2024-11-15 10:59:31.768717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.357 [2024-11-15 10:59:31.768719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.617 I/O targets: 00:19:12.617 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:12.617 00:19:12.617 00:19:12.617 CUnit - A unit testing framework for C - Version 2.1-3 00:19:12.617 http://cunit.sourceforge.net/ 00:19:12.617 00:19:12.617 00:19:12.617 Suite: bdevio tests on: Nvme1n1 00:19:12.879 Test: blockdev write read block ...passed 00:19:12.879 Test: blockdev write zeroes read block ...passed 00:19:12.879 Test: blockdev write zeroes read no split ...passed 00:19:12.879 Test: blockdev write zeroes read split ...passed 00:19:12.879 Test: blockdev write zeroes read split partial ...passed 00:19:12.879 Test: blockdev reset ...[2024-11-15 10:59:32.298259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:12.879 [2024-11-15 10:59:32.298355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431800 (9): Bad file descriptor 00:19:12.879 [2024-11-15 10:59:32.401421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:12.879 passed 00:19:13.141 Test: blockdev write read 8 blocks ...passed 00:19:13.141 Test: blockdev write read size > 128k ...passed 00:19:13.141 Test: blockdev write read invalid size ...passed 00:19:13.141 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.141 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.141 Test: blockdev write read max offset ...passed 00:19:13.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:13.141 Test: blockdev writev readv 8 blocks ...passed 00:19:13.141 Test: blockdev writev readv 30 x 1block ...passed 00:19:13.141 Test: blockdev writev readv block ...passed 00:19:13.404 Test: blockdev writev readv size > 128k ...passed 00:19:13.404 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:13.404 Test: blockdev comparev and writev ...[2024-11-15 10:59:32.711459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.711525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.711534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.712082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.712103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.712119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.712129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.712710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.712722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.712736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.712746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.713274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.713287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.713301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.404 [2024-11-15 10:59:32.713310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:13.404 passed 00:19:13.404 Test: blockdev nvme passthru rw ...passed 00:19:13.404 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:59:32.799591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.404 [2024-11-15 10:59:32.799655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.800008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.404 [2024-11-15 10:59:32.800029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.800410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.404 [2024-11-15 10:59:32.800422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.404 [2024-11-15 10:59:32.800818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.404 [2024-11-15 10:59:32.800831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.404 passed 00:19:13.404 Test: blockdev nvme admin passthru ...passed 00:19:13.404 Test: blockdev copy ...passed 00:19:13.404 00:19:13.404 Run Summary: Type Total Ran Passed Failed Inactive 00:19:13.404 suites 1 1 n/a 0 0 00:19:13.404 tests 23 23 23 0 0 00:19:13.404 asserts 152 152 152 0 n/a 00:19:13.404 00:19:13.404 Elapsed time = 1.503 seconds 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.667 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:13.667 rmmod nvme_tcp 00:19:13.667 rmmod nvme_fabrics 00:19:13.928 rmmod nvme_keyring 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 396769 ']' 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 396769 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 396769 ']' 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 396769 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 396769 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:13.928 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 396769' 00:19:13.929 killing process with pid 396769 00:19:13.929 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 396769 00:19:13.929 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 396769 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.190 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.102 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:16.102 00:19:16.102 real 0m12.798s 00:19:16.102 user 0m15.996s 00:19:16.102 sys 0m6.626s 00:19:16.102 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:16.102 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.102 ************************************ 00:19:16.102 END TEST nvmf_bdevio_no_huge 00:19:16.102 ************************************ 00:19:16.362 10:59:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.363 ************************************ 00:19:16.363 START TEST nvmf_tls 00:19:16.363 ************************************ 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.363 * Looking for test storage... 00:19:16.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.363 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.625 --rc genhtml_branch_coverage=1 00:19:16.625 --rc genhtml_function_coverage=1 00:19:16.625 --rc genhtml_legend=1 00:19:16.625 --rc geninfo_all_blocks=1 00:19:16.625 --rc geninfo_unexecuted_blocks=1 00:19:16.625 00:19:16.625 ' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.625 --rc genhtml_branch_coverage=1 00:19:16.625 --rc genhtml_function_coverage=1 00:19:16.625 --rc genhtml_legend=1 00:19:16.625 --rc geninfo_all_blocks=1 00:19:16.625 --rc geninfo_unexecuted_blocks=1 00:19:16.625 00:19:16.625 ' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.625 --rc genhtml_branch_coverage=1 00:19:16.625 --rc genhtml_function_coverage=1 00:19:16.625 --rc genhtml_legend=1 00:19:16.625 --rc geninfo_all_blocks=1 00:19:16.625 --rc geninfo_unexecuted_blocks=1 00:19:16.625 00:19:16.625 ' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.625 --rc genhtml_branch_coverage=1 00:19:16.625 --rc genhtml_function_coverage=1 00:19:16.625 --rc genhtml_legend=1 00:19:16.625 --rc geninfo_all_blocks=1 00:19:16.625 --rc geninfo_unexecuted_blocks=1 00:19:16.625 00:19:16.625 ' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
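The `lcov --version` probe traced above feeds the `lt`/`cmp_versions` helpers from scripts/common.sh: each version string is split on `.`, `-` and `:` into an array and compared field by field, so `lt 1.15 2` succeeds here and the pre-2.0 `--rc lcov_branch_coverage=...` style flags get selected for LCOV_OPTS. A minimal sketch of that comparison, reconstructed from the trace (only the `<` and `>` operators are handled, and the real helper additionally normalizes each field through its `decimal` check, so the exact bodies below are an assumption):

    # Sketch of the version test driving the lcov branch above; field splitting
    # mirrors the trace (IFS=.-:), return conventions are simplified.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                      # split fields on '.', '-' and ':' as the trace does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # a missing field compares as 0; non-numeric fields are not handled here
            local f1=${ver1[v]:-0} f2=${ver2[v]:-0}
            (( f1 > f2 )) && { [[ $op == '>' ]]; return; }
            (( f1 < f2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all fields equal
    }

    lt 1.15 2 && echo "lcov predates 2.x: use --rc lcov_branch_coverage style flags"

This matches the run recorded above: 1.15 sorts below 2, so lcov_rc_opt is set to the old-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options before LCOV_OPTS/LCOV are exported.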
00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.625 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.626 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:24.772 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:24.773 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:24.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:24.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:24.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:19:24.773 00:19:24.773 --- 10.0.0.2 ping statistics --- 00:19:24.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.773 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:19:24.773 00:19:24.773 --- 10.0.0.1 ping statistics --- 00:19:24.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.773 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=401587 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 401587 00:19:24.773 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 401587 ']' 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.774 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.774 [2024-11-15 10:59:43.546252] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:19:24.774 [2024-11-15 10:59:43.546322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.774 [2024-11-15 10:59:43.649347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.774 [2024-11-15 10:59:43.699800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.774 [2024-11-15 10:59:43.699848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.774 [2024-11-15 10:59:43.699857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.774 [2024-11-15 10:59:43.699864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.774 [2024-11-15 10:59:43.699870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.774 [2024-11-15 10:59:43.700638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:25.035 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:25.297 true 00:19:25.297 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.297 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:25.297 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:25.297 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:25.297 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:25.558 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.558 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:25.820 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:25.820 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:25.820 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:25.820 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.820 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:26.081 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:26.081 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:26.081 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.081 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:26.342 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:26.342 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:26.342 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:26.605 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.605 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:26.605 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:26.605 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:26.605 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:26.867 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.867 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.JygbgDzP2B 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.c1vVv9EpDv 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.JygbgDzP2B 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.c1vVv9EpDv 00:19:27.129 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:27.391 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:27.653 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.JygbgDzP2B 00:19:27.653 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JygbgDzP2B 00:19:27.653 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.653 [2024-11-15 10:59:47.151025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.653 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.913 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.174 [2024-11-15 10:59:47.483829] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.174 [2024-11-15 10:59:47.484040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.174 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.174 malloc0 00:19:28.174 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.435 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JygbgDzP2B 00:19:28.696 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.696 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.JygbgDzP2B 00:19:40.930 Initializing NVMe Controllers 00:19:40.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.930 Initialization complete. Launching workers. 00:19:40.930 ======================================================== 00:19:40.930 Latency(us) 00:19:40.930 Device Information : IOPS MiB/s Average min max 00:19:40.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18533.68 72.40 3453.39 1175.69 4597.65 00:19:40.930 ======================================================== 00:19:40.930 Total : 18533.68 72.40 3453.39 1175.69 4597.65 00:19:40.930 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JygbgDzP2B 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JygbgDzP2B 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=404471 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 404471 /var/tmp/bdevperf.sock 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 404471 ']' 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:40.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.930 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.930 [2024-11-15 10:59:58.331754] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:19:40.930 [2024-11-15 10:59:58.331809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404471 ] 00:19:40.930 [2024-11-15 10:59:58.419163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.930 [2024-11-15 10:59:58.454292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.930 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.930 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:40.930 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JygbgDzP2B 00:19:40.930 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.930 [2024-11-15 10:59:59.430877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.930 TLSTESTn1 00:19:40.930 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.930 Running I/O for 10 seconds... 
00:19:42.132 5515.00 IOPS, 21.54 MiB/s [2024-11-15T10:00:03.043Z] 5078.00 IOPS, 19.84 MiB/s [2024-11-15T10:00:03.614Z] 4964.33 IOPS, 19.39 MiB/s [2024-11-15T10:00:04.995Z] 5208.50 IOPS, 20.35 MiB/s [2024-11-15T10:00:05.936Z] 5354.00 IOPS, 20.91 MiB/s [2024-11-15T10:00:06.880Z] 5263.50 IOPS, 20.56 MiB/s [2024-11-15T10:00:07.825Z] 5312.43 IOPS, 20.75 MiB/s [2024-11-15T10:00:08.766Z] 5391.50 IOPS, 21.06 MiB/s [2024-11-15T10:00:09.706Z] 5397.00 IOPS, 21.08 MiB/s [2024-11-15T10:00:09.706Z] 5434.40 IOPS, 21.23 MiB/s 00:19:50.179 Latency(us) 00:19:50.179 [2024-11-15T10:00:09.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.179 Verification LBA range: start 0x0 length 0x2000 00:19:50.179 TLSTESTn1 : 10.02 5438.45 21.24 0.00 0.00 23501.51 5870.93 66409.81 00:19:50.179 [2024-11-15T10:00:09.706Z] =================================================================================================================== 00:19:50.179 [2024-11-15T10:00:09.706Z] Total : 5438.45 21.24 0.00 0.00 23501.51 5870.93 66409.81 00:19:50.179 { 00:19:50.179 "results": [ 00:19:50.179 { 00:19:50.179 "job": "TLSTESTn1", 00:19:50.179 "core_mask": "0x4", 00:19:50.179 "workload": "verify", 00:19:50.179 "status": "finished", 00:19:50.179 "verify_range": { 00:19:50.179 "start": 0, 00:19:50.179 "length": 8192 00:19:50.179 }, 00:19:50.179 "queue_depth": 128, 00:19:50.179 "io_size": 4096, 00:19:50.179 "runtime": 10.015914, 00:19:50.179 "iops": 5438.445258216075, 00:19:50.179 "mibps": 21.243926789906542, 00:19:50.179 "io_failed": 0, 00:19:50.179 "io_timeout": 0, 00:19:50.179 "avg_latency_us": 23501.514925740303, 00:19:50.179 "min_latency_us": 5870.933333333333, 00:19:50.179 "max_latency_us": 66409.81333333334 00:19:50.179 } 00:19:50.179 ], 00:19:50.179 "core_count": 1 00:19:50.179 } 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 404471 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 404471 ']' 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 404471 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.179 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 404471 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 404471' 00:19:50.440 killing process with pid 404471 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 404471 00:19:50.440 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.440 00:19:50.440 Latency(us) 00:19:50.440 [2024-11-15T10:00:09.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.440 [2024-11-15T10:00:09.967Z] 
=================================================================================================================== 00:19:50.440 [2024-11-15T10:00:09.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 404471 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1vVv9EpDv 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1vVv9EpDv 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1vVv9EpDv 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c1vVv9EpDv 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=407178 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 407178 /var/tmp/bdevperf.sock 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 407178 ']' 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.440 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.440 [2024-11-15 11:00:09.901181] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:19:50.440 [2024-11-15 11:00:09.901236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407178 ] 00:19:50.700 [2024-11-15 11:00:09.983254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.700 [2024-11-15 11:00:10.013021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.270 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.270 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.270 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c1vVv9EpDv 00:19:51.530 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.530 [2024-11-15 11:00:10.996674] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.530 [2024-11-15 11:00:11.005319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.530 [2024-11-15 11:00:11.005900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214abb0 (107): Transport endpoint is not connected 00:19:51.530 [2024-11-15 11:00:11.006896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214abb0 (9): Bad file descriptor 00:19:51.530 [2024-11-15 11:00:11.007897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:51.530 [2024-11-15 11:00:11.007904] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.530 [2024-11-15 11:00:11.007910] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:51.530 [2024-11-15 11:00:11.007917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:51.530 request: 00:19:51.530 { 00:19:51.530 "name": "TLSTEST", 00:19:51.530 "trtype": "tcp", 00:19:51.530 "traddr": "10.0.0.2", 00:19:51.530 "adrfam": "ipv4", 00:19:51.530 "trsvcid": "4420", 00:19:51.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.530 "prchk_reftag": false, 00:19:51.530 "prchk_guard": false, 00:19:51.530 "hdgst": false, 00:19:51.530 "ddgst": false, 00:19:51.530 "psk": "key0", 00:19:51.530 "allow_unrecognized_csi": false, 00:19:51.530 "method": "bdev_nvme_attach_controller", 00:19:51.530 "req_id": 1 00:19:51.530 } 00:19:51.530 Got JSON-RPC error response 00:19:51.530 response: 00:19:51.530 { 00:19:51.530 "code": -5, 00:19:51.530 "message": "Input/output error" 00:19:51.530 } 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 407178 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 407178 ']' 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 407178 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.530 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 407178 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 407178' 00:19:51.789 killing process with pid 407178 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 407178 00:19:51.789 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.789 00:19:51.789 Latency(us) 00:19:51.789 [2024-11-15T10:00:11.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.789 [2024-11-15T10:00:11.316Z] =================================================================================================================== 00:19:51.789 [2024-11-15T10:00:11.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 407178 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JygbgDzP2B 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.JygbgDzP2B 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:51.789 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JygbgDzP2B 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JygbgDzP2B 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=407510 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 407510 /var/tmp/bdevperf.sock 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 407510 ']' 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.790 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.790 [2024-11-15 11:00:11.253911] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:19:51.790 [2024-11-15 11:00:11.253974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407510 ] 00:19:52.049 [2024-11-15 11:00:11.338222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.049 [2024-11-15 11:00:11.367127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.619 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.619 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.619 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JygbgDzP2B 00:19:52.878 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:52.878 [2024-11-15 11:00:12.370938] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.878 [2024-11-15 11:00:12.378588] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:52.878 [2024-11-15 11:00:12.378611] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:52.878 [2024-11-15 11:00:12.378635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.879 [2024-11-15 11:00:12.379180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984bb0 (107): Transport endpoint is not connected 00:19:52.879 [2024-11-15 11:00:12.380176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984bb0 (9): Bad file descriptor 00:19:52.879 [2024-11-15 11:00:12.381177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:52.879 [2024-11-15 11:00:12.381184] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.879 [2024-11-15 11:00:12.381190] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:52.879 [2024-11-15 11:00:12.381197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
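This failure is the deliberate host-mismatch case driven by tls.sh@150: key0 carries the PSK that was registered for nqn.2016-06.io.spdk:host1, but the controller is attached as host2, so the target finds no PSK for that identity and the connect fails with an I/O error. Condensed from the trace above (workspace paths shortened to scripts/ and build/), the client-side flow is roughly:

# bdevperf runs with its own JSON-RPC socket; the test drives it there
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# load the PSK file into the keyring, then attach using that key
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JygbgDzP2B
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
# expected to fail here: no PSK registered for the host2/cnode1 pair

The JSON-RPC request and error response for that attach attempt follow.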
00:19:52.879 request: 00:19:52.879 { 00:19:52.879 "name": "TLSTEST", 00:19:52.879 "trtype": "tcp", 00:19:52.879 "traddr": "10.0.0.2", 00:19:52.879 "adrfam": "ipv4", 00:19:52.879 "trsvcid": "4420", 00:19:52.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.879 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:52.879 "prchk_reftag": false, 00:19:52.879 "prchk_guard": false, 00:19:52.879 "hdgst": false, 00:19:52.879 "ddgst": false, 00:19:52.879 "psk": "key0", 00:19:52.879 "allow_unrecognized_csi": false, 00:19:52.879 "method": "bdev_nvme_attach_controller", 00:19:52.879 "req_id": 1 00:19:52.879 } 00:19:52.879 Got JSON-RPC error response 00:19:52.879 response: 00:19:52.879 { 00:19:52.879 "code": -5, 00:19:52.879 "message": "Input/output error" 00:19:52.879 } 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 407510 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 407510 ']' 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 407510 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.879 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 407510 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 407510' 00:19:53.138 killing process with pid 407510 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 407510 00:19:53.138 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.138 00:19:53.138 Latency(us) 00:19:53.138 [2024-11-15T10:00:12.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.138 [2024-11-15T10:00:12.665Z] =================================================================================================================== 00:19:53.138 [2024-11-15T10:00:12.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 407510 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JygbgDzP2B 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.JygbgDzP2B 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JygbgDzP2B 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JygbgDzP2B 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=407749 00:19:53.138 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 407749 /var/tmp/bdevperf.sock 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 407749 ']' 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.139 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.139 [2024-11-15 11:00:12.609904] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:19:53.139 [2024-11-15 11:00:12.609958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407749 ] 00:19:53.398 [2024-11-15 11:00:12.692149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.398 [2024-11-15 11:00:12.720983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.967 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.967 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:53.967 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JygbgDzP2B 00:19:54.227 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.227 [2024-11-15 11:00:13.736892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.227 [2024-11-15 11:00:13.741893] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:54.227 [2024-11-15 11:00:13.741916] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:54.227 [2024-11-15 11:00:13.741935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:54.227 [2024-11-15 11:00:13.742073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdbb0 (107): Transport endpoint is not connected 00:19:54.227 [2024-11-15 11:00:13.743060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdbb0 (9): Bad file descriptor 00:19:54.227 [2024-11-15 11:00:13.744062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:54.227 [2024-11-15 11:00:13.744068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:54.227 [2024-11-15 11:00:13.744074] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:54.227 [2024-11-15 11:00:13.744081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
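Both mismatch cases, the wrong host NQN above and the wrong subsystem NQN here, fail at the same point: tcp_sock_get_key derives a PSK identity string from the host/subsystem pair and looks it up in the keyring. The identity is exactly what the errors print, a fixed prefix plus the two NQNs; a sketch of its shape:

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # the identity the target cannot find

Only the host1/cnode1 pairing was registered with a PSK, so this lookup fails and the connection is torn down before the controller initializes.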
00:19:54.227 request: 00:19:54.227 { 00:19:54.227 "name": "TLSTEST", 00:19:54.227 "trtype": "tcp", 00:19:54.227 "traddr": "10.0.0.2", 00:19:54.228 "adrfam": "ipv4", 00:19:54.228 "trsvcid": "4420", 00:19:54.228 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.228 "prchk_reftag": false, 00:19:54.228 "prchk_guard": false, 00:19:54.228 "hdgst": false, 00:19:54.228 "ddgst": false, 00:19:54.228 "psk": "key0", 00:19:54.228 "allow_unrecognized_csi": false, 00:19:54.228 "method": "bdev_nvme_attach_controller", 00:19:54.228 "req_id": 1 00:19:54.228 } 00:19:54.228 Got JSON-RPC error response 00:19:54.228 response: 00:19:54.228 { 00:19:54.228 "code": -5, 00:19:54.228 "message": "Input/output error" 00:19:54.228 } 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 407749 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 407749 ']' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 407749 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 407749 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 407749' 00:19:54.487 killing process with pid 407749 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 407749 00:19:54.487 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.487 00:19:54.487 Latency(us) 00:19:54.487 [2024-11-15T10:00:14.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.487 [2024-11-15T10:00:14.014Z] =================================================================================================================== 00:19:54.487 [2024-11-15T10:00:14.014Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 407749 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.487 11:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=408089 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 408089 /var/tmp/bdevperf.sock 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 408089 ']' 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.487 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.487 [2024-11-15 11:00:13.987937] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:19:54.487 [2024-11-15 11:00:13.987992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408089 ] 00:19:54.747 [2024-11-15 11:00:14.072344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.747 [2024-11-15 11:00:14.099921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.317 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.317 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:55.317 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:55.577 [2024-11-15 11:00:14.943098] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:55.577 [2024-11-15 11:00:14.943123] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:55.577 request: 00:19:55.577 { 00:19:55.577 "name": "key0", 00:19:55.577 "path": "", 00:19:55.578 "method": "keyring_file_add_key", 00:19:55.578 "req_id": 1 00:19:55.578 } 00:19:55.578 Got JSON-RPC error response 00:19:55.578 response: 00:19:55.578 { 00:19:55.578 "code": -1, 00:19:55.578 "message": "Operation not permitted" 00:19:55.578 } 00:19:55.578 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.838 [2024-11-15 11:00:15.127644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.838 [2024-11-15 11:00:15.127665] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:55.838 request: 00:19:55.838 { 00:19:55.838 "name": "TLSTEST", 00:19:55.838 "trtype": "tcp", 00:19:55.838 "traddr": "10.0.0.2", 00:19:55.838 "adrfam": "ipv4", 00:19:55.838 "trsvcid": "4420", 00:19:55.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.838 "prchk_reftag": false, 00:19:55.838 "prchk_guard": false, 00:19:55.838 "hdgst": false, 00:19:55.838 "ddgst": false, 00:19:55.838 "psk": "key0", 00:19:55.838 "allow_unrecognized_csi": false, 00:19:55.838 "method": "bdev_nvme_attach_controller", 00:19:55.838 "req_id": 1 00:19:55.838 } 00:19:55.838 Got JSON-RPC error response 00:19:55.838 response: 00:19:55.838 { 00:19:55.838 "code": -126, 00:19:55.838 "message": "Required key not available" 00:19:55.838 } 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 408089 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 408089 ']' 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 408089 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 408089 
00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 408089' 00:19:55.838 killing process with pid 408089 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 408089 00:19:55.838 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.838 00:19:55.838 Latency(us) 00:19:55.838 [2024-11-15T10:00:15.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.838 [2024-11-15T10:00:15.365Z] =================================================================================================================== 00:19:55.838 [2024-11-15T10:00:15.365Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 408089 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 401587 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 401587 ']' 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 401587 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.838 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 401587 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 401587' 00:19:56.110 killing process with pid 401587 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 401587 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 401587 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:56.110 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.0uhU7HRmUL 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.0uhU7HRmUL 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=408444 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 408444 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 408444 ']' 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.111 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.111 [2024-11-15 11:00:15.623854] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:19:56.111 [2024-11-15 11:00:15.623913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.376 [2024-11-15 11:00:15.715986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.376 [2024-11-15 11:00:15.748109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.376 [2024-11-15 11:00:15.748143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
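The key_long value above is the NVMe/TCP PSK interchange form emitted by format_interchange_psk: the literal prefix NVMeTLSkey-1, a digest selector (the '2' argument, rendered as 02), and a base64 payload, all colon-delimited. A minimal sketch of what the inline 'python -' step appears to compute, assuming the payload is the raw key bytes with zlib's CRC-32 appended little-endian:

key_hex=00112233445566778899aabbccddeeff0011223344556677
python - "$key_hex" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(key))   # CRC-32 of the key bytes, little-endian
print('NVMeTLSkey-1:02:%s:' % base64.b64encode(key + crc).decode())
PY
# under those assumptions this reproduces the key_long value captured above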
00:19:56.376 [2024-11-15 11:00:15.748148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.376 [2024-11-15 11:00:15.748153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.376 [2024-11-15 11:00:15.748160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.376 [2024-11-15 11:00:15.748683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.945 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:56.945 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:56.945 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.945 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.945 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.946 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.946 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:19:56.946 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0uhU7HRmUL 00:19:56.946 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.207 [2024-11-15 11:00:16.606986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.207 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.469 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.469 [2024-11-15 11:00:16.967873] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.469 [2024-11-15 11:00:16.968077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.786 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.786 malloc0 00:19:57.786 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:58.083 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:19:58.083 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0uhU7HRmUL 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0uhU7HRmUL 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=408816 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 408816 /var/tmp/bdevperf.sock 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 408816 ']' 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.377 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.377 [2024-11-15 11:00:17.759251] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:19:58.377 [2024-11-15 11:00:17.759305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408816 ] 00:19:58.377 [2024-11-15 11:00:17.841450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.377 [2024-11-15 11:00:17.870484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.332 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.332 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:59.332 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:19:59.332 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.593 [2024-11-15 11:00:18.878233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.593 TLSTESTn1 00:19:59.593 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:59.593 Running I/O for 10 seconds... 00:20:01.545 6012.00 IOPS, 23.48 MiB/s [2024-11-15T10:00:22.455Z] 5864.00 IOPS, 22.91 MiB/s [2024-11-15T10:00:23.397Z] 6070.33 IOPS, 23.71 MiB/s [2024-11-15T10:00:24.338Z] 6056.50 IOPS, 23.66 MiB/s [2024-11-15T10:00:25.282Z] 6050.40 IOPS, 23.63 MiB/s [2024-11-15T10:00:26.224Z] 6009.83 IOPS, 23.48 MiB/s [2024-11-15T10:00:27.167Z] 5977.43 IOPS, 23.35 MiB/s [2024-11-15T10:00:28.109Z] 6037.38 IOPS, 23.58 MiB/s [2024-11-15T10:00:29.496Z] 6101.67 IOPS, 23.83 MiB/s [2024-11-15T10:00:29.496Z] 6129.00 IOPS, 23.94 MiB/s 00:20:09.969 Latency(us) 00:20:09.969 [2024-11-15T10:00:29.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.969 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.969 Verification LBA range: start 0x0 length 0x2000 00:20:09.969 TLSTESTn1 : 10.01 6135.34 23.97 0.00 0.00 20832.80 3986.77 21845.33 00:20:09.969 [2024-11-15T10:00:29.496Z] =================================================================================================================== 00:20:09.969 [2024-11-15T10:00:29.496Z] Total : 6135.34 23.97 0.00 0.00 20832.80 3986.77 21845.33 00:20:09.969 { 00:20:09.969 "results": [ 00:20:09.969 { 00:20:09.969 "job": "TLSTESTn1", 00:20:09.969 "core_mask": "0x4", 00:20:09.969 "workload": "verify", 00:20:09.969 "status": "finished", 00:20:09.969 "verify_range": { 00:20:09.969 "start": 0, 00:20:09.969 "length": 8192 00:20:09.969 }, 00:20:09.969 "queue_depth": 128, 00:20:09.969 "io_size": 4096, 00:20:09.969 "runtime": 10.010204, 00:20:09.969 "iops": 6135.339499574634, 00:20:09.969 "mibps": 23.966169920213414, 00:20:09.969 "io_failed": 0, 00:20:09.969 "io_timeout": 0, 00:20:09.969 "avg_latency_us": 20832.80359168078, 00:20:09.969 "min_latency_us": 3986.7733333333335, 00:20:09.969 "max_latency_us": 21845.333333333332 00:20:09.969 } 00:20:09.969 ], 00:20:09.969 
"core_count": 1 00:20:09.969 } 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 408816 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 408816 ']' 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 408816 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 408816 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 408816' 00:20:09.969 killing process with pid 408816 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 408816 00:20:09.969 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.969 00:20:09.969 Latency(us) 00:20:09.969 [2024-11-15T10:00:29.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.969 [2024-11-15T10:00:29.496Z] =================================================================================================================== 00:20:09.969 [2024-11-15T10:00:29.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 408816 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.0uhU7HRmUL 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0uhU7HRmUL 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0uhU7HRmUL 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0uhU7HRmUL 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.969 
11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0uhU7HRmUL 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=411158 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 411158 /var/tmp/bdevperf.sock 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 411158 ']' 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.969 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.969 [2024-11-15 11:00:29.347075] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:20:09.969 [2024-11-15 11:00:29.347133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411158 ] 00:20:09.969 [2024-11-15 11:00:29.428986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.969 [2024-11-15 11:00:29.455831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.913 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.913 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:10.913 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:10.913 [2024-11-15 11:00:30.303336] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0uhU7HRmUL': 0100666 00:20:10.913 [2024-11-15 11:00:30.303361] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:10.913 request: 00:20:10.913 { 00:20:10.913 "name": "key0", 00:20:10.913 "path": "/tmp/tmp.0uhU7HRmUL", 00:20:10.913 "method": "keyring_file_add_key", 00:20:10.913 "req_id": 1 00:20:10.913 } 00:20:10.913 Got JSON-RPC error response 00:20:10.913 response: 00:20:10.913 { 00:20:10.913 "code": -1, 00:20:10.913 "message": "Operation not permitted" 00:20:10.913 } 00:20:10.913 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.175 [2024-11-15 11:00:30.487879] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.175 [2024-11-15 11:00:30.487903] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:11.175 request: 00:20:11.175 { 00:20:11.175 "name": "TLSTEST", 00:20:11.175 "trtype": "tcp", 00:20:11.175 "traddr": "10.0.0.2", 00:20:11.175 "adrfam": "ipv4", 00:20:11.175 "trsvcid": "4420", 00:20:11.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.175 "prchk_reftag": false, 00:20:11.175 "prchk_guard": false, 00:20:11.175 "hdgst": false, 00:20:11.175 "ddgst": false, 00:20:11.175 "psk": "key0", 00:20:11.175 "allow_unrecognized_csi": false, 00:20:11.175 "method": "bdev_nvme_attach_controller", 00:20:11.175 "req_id": 1 00:20:11.175 } 00:20:11.175 Got JSON-RPC error response 00:20:11.175 response: 00:20:11.175 { 00:20:11.175 "code": -126, 00:20:11.175 "message": "Required key not available" 00:20:11.175 } 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 411158 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 411158 ']' 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 411158 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411158 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411158' 00:20:11.175 killing process with pid 411158 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 411158 00:20:11.175 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.175 00:20:11.175 Latency(us) 00:20:11.175 [2024-11-15T10:00:30.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.175 [2024-11-15T10:00:30.702Z] =================================================================================================================== 00:20:11.175 [2024-11-15T10:00:30.702Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 411158 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 408444 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 408444 ']' 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 408444 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.175 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 408444 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 408444' 00:20:11.436 killing process with pid 408444 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 408444 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 408444 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=411507 
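These two 0666 runs, and the target-side repeat at 11:00:32 below, fail for the same reason: keyring_file_check_path rejects any key file whose mode grants group or other access, which is exactly what the chmod 0666 at tls.sh@171 set up. The working pattern, with rpc.py's workspace path dropped, is simply:

chmod 0600 /tmp/tmp.0uhU7HRmUL                    # owner-only: accepted
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL

chmod 0666 /tmp/tmp.0uhU7HRmUL                    # world-accessible: rejected as
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL   # 'Invalid permissions ... 0100666'

The suite restores 0600 at tls.sh@182 before the next positive test.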
00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 411507 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 411507 ']' 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.436 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.436 [2024-11-15 11:00:30.916895] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:11.436 [2024-11-15 11:00:30.916951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.697 [2024-11-15 11:00:31.006423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.697 [2024-11-15 11:00:31.041259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.697 [2024-11-15 11:00:31.041294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.697 [2024-11-15 11:00:31.041300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.697 [2024-11-15 11:00:31.041305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.697 [2024-11-15 11:00:31.041309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.697 [2024-11-15 11:00:31.041840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0uhU7HRmUL 00:20:12.268 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.530 [2024-11-15 11:00:31.921458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.530 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.791 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.791 [2024-11-15 11:00:32.274351] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.791 [2024-11-15 11:00:32.274558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.791 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.052 malloc0 00:20:13.052 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.313 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:13.313 [2024-11-15 
11:00:32.781290] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0uhU7HRmUL': 0100666 00:20:13.314 [2024-11-15 11:00:32.781312] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:13.314 request: 00:20:13.314 { 00:20:13.314 "name": "key0", 00:20:13.314 "path": "/tmp/tmp.0uhU7HRmUL", 00:20:13.314 "method": "keyring_file_add_key", 00:20:13.314 "req_id": 1 00:20:13.314 } 00:20:13.314 Got JSON-RPC error response 00:20:13.314 response: 00:20:13.314 { 00:20:13.314 "code": -1, 00:20:13.314 "message": "Operation not permitted" 00:20:13.314 } 00:20:13.314 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.574 [2024-11-15 11:00:32.949722] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:13.574 [2024-11-15 11:00:32.949748] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:13.574 request: 00:20:13.574 { 00:20:13.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.574 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.574 "psk": "key0", 00:20:13.574 "method": "nvmf_subsystem_add_host", 00:20:13.574 "req_id": 1 00:20:13.574 } 00:20:13.574 Got JSON-RPC error response 00:20:13.574 response: 00:20:13.574 { 00:20:13.574 "code": -32603, 00:20:13.574 "message": "Internal error" 00:20:13.574 } 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 411507 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 411507 ']' 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 411507 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.574 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411507 00:20:13.574 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:13.574 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:13.574 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411507' 00:20:13.574 killing process with pid 411507 00:20:13.574 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 411507 00:20:13.574 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 411507 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.0uhU7HRmUL 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=411883 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 411883 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 411883 ']' 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.834 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.834 [2024-11-15 11:00:33.212176] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:13.834 [2024-11-15 11:00:33.212229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.834 [2024-11-15 11:00:33.303182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.834 [2024-11-15 11:00:33.333526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.834 [2024-11-15 11:00:33.333560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.834 [2024-11-15 11:00:33.333571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.834 [2024-11-15 11:00:33.333576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.834 [2024-11-15 11:00:33.333580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
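Annotation: the keyring_file_add_key failure above (key file mode 0100666, followed by nvmf_subsystem_add_host reporting that key0 does not exist) is the deliberate negative-path check at target/tls.sh@178: SPDK's file-based keyring rejects any key file that is group- or world-accessible, and a key that was never registered cannot be bound to a host. The chmod 0600 at target/tls.sh@182 is what lets the positive-path run that follows succeed. A minimal sketch of the requirement, assuming the default rpc.py socket and a hypothetical key path (the test itself uses a mktemp file, and the key material is elided here):

  umask 077                               # create the key file private from the start
  KEY=/tmp/psk.example                    # hypothetical path, not the one in this log
  printf '%s' "$PSK" > "$KEY"             # $PSK: TLS PSK interchange string, elided
  chmod 0600 "$KEY"                       # looser modes (e.g. 0666) are rejected
  scripts/rpc.py keyring_file_add_key key0 "$KEY"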
00:20:13.834 [2024-11-15 11:00:33.334048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0uhU7HRmUL 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.777 [2024-11-15 11:00:34.216019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.777 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.037 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.298 [2024-11-15 11:00:34.568893] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.298 [2024-11-15 11:00:34.569102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.298 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.298 malloc0 00:20:15.298 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.558 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:15.819 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.819 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=412268 00:20:15.819 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 412268 /var/tmp/bdevperf.sock 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 412268 ']' 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.820 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.820 [2024-11-15 11:00:35.299745] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:15.820 [2024-11-15 11:00:35.299798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412268 ] 00:20:16.081 [2024-11-15 11:00:35.385626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.081 [2024-11-15 11:00:35.414973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.652 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.652 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:16.652 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:16.913 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.174 [2024-11-15 11:00:36.443744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.174 TLSTESTn1 00:20:17.174 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:17.437 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:17.437 "subsystems": [ 00:20:17.437 { 00:20:17.437 "subsystem": "keyring", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "keyring_file_add_key", 00:20:17.437 "params": { 00:20:17.437 "name": "key0", 00:20:17.437 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:17.437 } 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "iobuf", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "iobuf_set_options", 00:20:17.437 "params": { 00:20:17.437 "small_pool_count": 8192, 00:20:17.437 "large_pool_count": 1024, 00:20:17.437 "small_bufsize": 8192, 00:20:17.437 "large_bufsize": 135168, 00:20:17.437 "enable_numa": false 00:20:17.437 } 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "sock", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "sock_set_default_impl", 00:20:17.437 "params": { 00:20:17.437 "impl_name": "posix" 
00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "sock_impl_set_options", 00:20:17.437 "params": { 00:20:17.437 "impl_name": "ssl", 00:20:17.437 "recv_buf_size": 4096, 00:20:17.437 "send_buf_size": 4096, 00:20:17.437 "enable_recv_pipe": true, 00:20:17.437 "enable_quickack": false, 00:20:17.437 "enable_placement_id": 0, 00:20:17.437 "enable_zerocopy_send_server": true, 00:20:17.437 "enable_zerocopy_send_client": false, 00:20:17.437 "zerocopy_threshold": 0, 00:20:17.437 "tls_version": 0, 00:20:17.437 "enable_ktls": false 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "sock_impl_set_options", 00:20:17.437 "params": { 00:20:17.437 "impl_name": "posix", 00:20:17.437 "recv_buf_size": 2097152, 00:20:17.437 "send_buf_size": 2097152, 00:20:17.437 "enable_recv_pipe": true, 00:20:17.437 "enable_quickack": false, 00:20:17.437 "enable_placement_id": 0, 00:20:17.437 "enable_zerocopy_send_server": true, 00:20:17.437 "enable_zerocopy_send_client": false, 00:20:17.437 "zerocopy_threshold": 0, 00:20:17.437 "tls_version": 0, 00:20:17.437 "enable_ktls": false 00:20:17.437 } 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "vmd", 00:20:17.437 "config": [] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "accel", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "accel_set_options", 00:20:17.437 "params": { 00:20:17.437 "small_cache_size": 128, 00:20:17.437 "large_cache_size": 16, 00:20:17.437 "task_count": 2048, 00:20:17.437 "sequence_count": 2048, 00:20:17.437 "buf_count": 2048 00:20:17.437 } 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "bdev", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "bdev_set_options", 00:20:17.437 "params": { 00:20:17.437 "bdev_io_pool_size": 65535, 00:20:17.437 "bdev_io_cache_size": 256, 00:20:17.437 "bdev_auto_examine": true, 00:20:17.437 "iobuf_small_cache_size": 128, 00:20:17.437 "iobuf_large_cache_size": 16 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_raid_set_options", 00:20:17.437 "params": { 00:20:17.437 "process_window_size_kb": 1024, 00:20:17.437 "process_max_bandwidth_mb_sec": 0 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_iscsi_set_options", 00:20:17.437 "params": { 00:20:17.437 "timeout_sec": 30 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_nvme_set_options", 00:20:17.437 "params": { 00:20:17.437 "action_on_timeout": "none", 00:20:17.437 "timeout_us": 0, 00:20:17.437 "timeout_admin_us": 0, 00:20:17.437 "keep_alive_timeout_ms": 10000, 00:20:17.437 "arbitration_burst": 0, 00:20:17.437 "low_priority_weight": 0, 00:20:17.437 "medium_priority_weight": 0, 00:20:17.437 "high_priority_weight": 0, 00:20:17.437 "nvme_adminq_poll_period_us": 10000, 00:20:17.437 "nvme_ioq_poll_period_us": 0, 00:20:17.437 "io_queue_requests": 0, 00:20:17.437 "delay_cmd_submit": true, 00:20:17.437 "transport_retry_count": 4, 00:20:17.437 "bdev_retry_count": 3, 00:20:17.437 "transport_ack_timeout": 0, 00:20:17.437 "ctrlr_loss_timeout_sec": 0, 00:20:17.437 "reconnect_delay_sec": 0, 00:20:17.437 "fast_io_fail_timeout_sec": 0, 00:20:17.437 "disable_auto_failback": false, 00:20:17.437 "generate_uuids": false, 00:20:17.437 "transport_tos": 0, 00:20:17.437 "nvme_error_stat": false, 00:20:17.437 "rdma_srq_size": 0, 00:20:17.437 "io_path_stat": false, 00:20:17.437 "allow_accel_sequence": false, 00:20:17.437 "rdma_max_cq_size": 0, 00:20:17.437 
"rdma_cm_event_timeout_ms": 0, 00:20:17.437 "dhchap_digests": [ 00:20:17.437 "sha256", 00:20:17.437 "sha384", 00:20:17.437 "sha512" 00:20:17.437 ], 00:20:17.437 "dhchap_dhgroups": [ 00:20:17.437 "null", 00:20:17.437 "ffdhe2048", 00:20:17.437 "ffdhe3072", 00:20:17.437 "ffdhe4096", 00:20:17.437 "ffdhe6144", 00:20:17.437 "ffdhe8192" 00:20:17.437 ] 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_nvme_set_hotplug", 00:20:17.437 "params": { 00:20:17.437 "period_us": 100000, 00:20:17.437 "enable": false 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_malloc_create", 00:20:17.437 "params": { 00:20:17.437 "name": "malloc0", 00:20:17.437 "num_blocks": 8192, 00:20:17.437 "block_size": 4096, 00:20:17.437 "physical_block_size": 4096, 00:20:17.437 "uuid": "3e869eb4-bdfc-4615-b204-1a74309688b0", 00:20:17.437 "optimal_io_boundary": 0, 00:20:17.437 "md_size": 0, 00:20:17.437 "dif_type": 0, 00:20:17.437 "dif_is_head_of_md": false, 00:20:17.437 "dif_pi_format": 0 00:20:17.437 } 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "method": "bdev_wait_for_examine" 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "nbd", 00:20:17.437 "config": [] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "scheduler", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.437 "method": "framework_set_scheduler", 00:20:17.437 "params": { 00:20:17.437 "name": "static" 00:20:17.437 } 00:20:17.437 } 00:20:17.437 ] 00:20:17.437 }, 00:20:17.437 { 00:20:17.437 "subsystem": "nvmf", 00:20:17.437 "config": [ 00:20:17.437 { 00:20:17.438 "method": "nvmf_set_config", 00:20:17.438 "params": { 00:20:17.438 "discovery_filter": "match_any", 00:20:17.438 "admin_cmd_passthru": { 00:20:17.438 "identify_ctrlr": false 00:20:17.438 }, 00:20:17.438 "dhchap_digests": [ 00:20:17.438 "sha256", 00:20:17.438 "sha384", 00:20:17.438 "sha512" 00:20:17.438 ], 00:20:17.438 "dhchap_dhgroups": [ 00:20:17.438 "null", 00:20:17.438 "ffdhe2048", 00:20:17.438 "ffdhe3072", 00:20:17.438 "ffdhe4096", 00:20:17.438 "ffdhe6144", 00:20:17.438 "ffdhe8192" 00:20:17.438 ] 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_set_max_subsystems", 00:20:17.438 "params": { 00:20:17.438 "max_subsystems": 1024 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_set_crdt", 00:20:17.438 "params": { 00:20:17.438 "crdt1": 0, 00:20:17.438 "crdt2": 0, 00:20:17.438 "crdt3": 0 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_create_transport", 00:20:17.438 "params": { 00:20:17.438 "trtype": "TCP", 00:20:17.438 "max_queue_depth": 128, 00:20:17.438 "max_io_qpairs_per_ctrlr": 127, 00:20:17.438 "in_capsule_data_size": 4096, 00:20:17.438 "max_io_size": 131072, 00:20:17.438 "io_unit_size": 131072, 00:20:17.438 "max_aq_depth": 128, 00:20:17.438 "num_shared_buffers": 511, 00:20:17.438 "buf_cache_size": 4294967295, 00:20:17.438 "dif_insert_or_strip": false, 00:20:17.438 "zcopy": false, 00:20:17.438 "c2h_success": false, 00:20:17.438 "sock_priority": 0, 00:20:17.438 "abort_timeout_sec": 1, 00:20:17.438 "ack_timeout": 0, 00:20:17.438 "data_wr_pool_size": 0 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_create_subsystem", 00:20:17.438 "params": { 00:20:17.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.438 "allow_any_host": false, 00:20:17.438 "serial_number": "SPDK00000000000001", 00:20:17.438 "model_number": "SPDK bdev Controller", 00:20:17.438 "max_namespaces": 10, 00:20:17.438 "min_cntlid": 1, 00:20:17.438 
"max_cntlid": 65519, 00:20:17.438 "ana_reporting": false 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_subsystem_add_host", 00:20:17.438 "params": { 00:20:17.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.438 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.438 "psk": "key0" 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_subsystem_add_ns", 00:20:17.438 "params": { 00:20:17.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.438 "namespace": { 00:20:17.438 "nsid": 1, 00:20:17.438 "bdev_name": "malloc0", 00:20:17.438 "nguid": "3E869EB4BDFC4615B2041A74309688B0", 00:20:17.438 "uuid": "3e869eb4-bdfc-4615-b204-1a74309688b0", 00:20:17.438 "no_auto_visible": false 00:20:17.438 } 00:20:17.438 } 00:20:17.438 }, 00:20:17.438 { 00:20:17.438 "method": "nvmf_subsystem_add_listener", 00:20:17.438 "params": { 00:20:17.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.438 "listen_address": { 00:20:17.438 "trtype": "TCP", 00:20:17.438 "adrfam": "IPv4", 00:20:17.438 "traddr": "10.0.0.2", 00:20:17.438 "trsvcid": "4420" 00:20:17.438 }, 00:20:17.438 "secure_channel": true 00:20:17.438 } 00:20:17.438 } 00:20:17.438 ] 00:20:17.438 } 00:20:17.438 ] 00:20:17.438 }' 00:20:17.438 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:17.700 "subsystems": [ 00:20:17.700 { 00:20:17.700 "subsystem": "keyring", 00:20:17.700 "config": [ 00:20:17.700 { 00:20:17.700 "method": "keyring_file_add_key", 00:20:17.700 "params": { 00:20:17.700 "name": "key0", 00:20:17.700 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:17.700 } 00:20:17.700 } 00:20:17.700 ] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "iobuf", 00:20:17.700 "config": [ 00:20:17.700 { 00:20:17.700 "method": "iobuf_set_options", 00:20:17.700 "params": { 00:20:17.700 "small_pool_count": 8192, 00:20:17.700 "large_pool_count": 1024, 00:20:17.700 "small_bufsize": 8192, 00:20:17.700 "large_bufsize": 135168, 00:20:17.700 "enable_numa": false 00:20:17.700 } 00:20:17.700 } 00:20:17.700 ] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "sock", 00:20:17.700 "config": [ 00:20:17.700 { 00:20:17.700 "method": "sock_set_default_impl", 00:20:17.700 "params": { 00:20:17.700 "impl_name": "posix" 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "sock_impl_set_options", 00:20:17.700 "params": { 00:20:17.700 "impl_name": "ssl", 00:20:17.700 "recv_buf_size": 4096, 00:20:17.700 "send_buf_size": 4096, 00:20:17.700 "enable_recv_pipe": true, 00:20:17.700 "enable_quickack": false, 00:20:17.700 "enable_placement_id": 0, 00:20:17.700 "enable_zerocopy_send_server": true, 00:20:17.700 "enable_zerocopy_send_client": false, 00:20:17.700 "zerocopy_threshold": 0, 00:20:17.700 "tls_version": 0, 00:20:17.700 "enable_ktls": false 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "sock_impl_set_options", 00:20:17.700 "params": { 00:20:17.700 "impl_name": "posix", 00:20:17.700 "recv_buf_size": 2097152, 00:20:17.700 "send_buf_size": 2097152, 00:20:17.700 "enable_recv_pipe": true, 00:20:17.700 "enable_quickack": false, 00:20:17.700 "enable_placement_id": 0, 00:20:17.700 "enable_zerocopy_send_server": true, 00:20:17.700 "enable_zerocopy_send_client": false, 00:20:17.700 "zerocopy_threshold": 0, 00:20:17.700 "tls_version": 0, 00:20:17.700 "enable_ktls": false 00:20:17.700 } 00:20:17.700 
} 00:20:17.700 ] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "vmd", 00:20:17.700 "config": [] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "accel", 00:20:17.700 "config": [ 00:20:17.700 { 00:20:17.700 "method": "accel_set_options", 00:20:17.700 "params": { 00:20:17.700 "small_cache_size": 128, 00:20:17.700 "large_cache_size": 16, 00:20:17.700 "task_count": 2048, 00:20:17.700 "sequence_count": 2048, 00:20:17.700 "buf_count": 2048 00:20:17.700 } 00:20:17.700 } 00:20:17.700 ] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "bdev", 00:20:17.700 "config": [ 00:20:17.700 { 00:20:17.700 "method": "bdev_set_options", 00:20:17.700 "params": { 00:20:17.700 "bdev_io_pool_size": 65535, 00:20:17.700 "bdev_io_cache_size": 256, 00:20:17.700 "bdev_auto_examine": true, 00:20:17.700 "iobuf_small_cache_size": 128, 00:20:17.700 "iobuf_large_cache_size": 16 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "bdev_raid_set_options", 00:20:17.700 "params": { 00:20:17.700 "process_window_size_kb": 1024, 00:20:17.700 "process_max_bandwidth_mb_sec": 0 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "bdev_iscsi_set_options", 00:20:17.700 "params": { 00:20:17.700 "timeout_sec": 30 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "bdev_nvme_set_options", 00:20:17.700 "params": { 00:20:17.700 "action_on_timeout": "none", 00:20:17.700 "timeout_us": 0, 00:20:17.700 "timeout_admin_us": 0, 00:20:17.700 "keep_alive_timeout_ms": 10000, 00:20:17.700 "arbitration_burst": 0, 00:20:17.700 "low_priority_weight": 0, 00:20:17.700 "medium_priority_weight": 0, 00:20:17.700 "high_priority_weight": 0, 00:20:17.700 "nvme_adminq_poll_period_us": 10000, 00:20:17.700 "nvme_ioq_poll_period_us": 0, 00:20:17.700 "io_queue_requests": 512, 00:20:17.700 "delay_cmd_submit": true, 00:20:17.700 "transport_retry_count": 4, 00:20:17.700 "bdev_retry_count": 3, 00:20:17.700 "transport_ack_timeout": 0, 00:20:17.700 "ctrlr_loss_timeout_sec": 0, 00:20:17.700 "reconnect_delay_sec": 0, 00:20:17.700 "fast_io_fail_timeout_sec": 0, 00:20:17.700 "disable_auto_failback": false, 00:20:17.700 "generate_uuids": false, 00:20:17.700 "transport_tos": 0, 00:20:17.700 "nvme_error_stat": false, 00:20:17.700 "rdma_srq_size": 0, 00:20:17.700 "io_path_stat": false, 00:20:17.700 "allow_accel_sequence": false, 00:20:17.700 "rdma_max_cq_size": 0, 00:20:17.700 "rdma_cm_event_timeout_ms": 0, 00:20:17.700 "dhchap_digests": [ 00:20:17.700 "sha256", 00:20:17.700 "sha384", 00:20:17.700 "sha512" 00:20:17.700 ], 00:20:17.700 "dhchap_dhgroups": [ 00:20:17.700 "null", 00:20:17.700 "ffdhe2048", 00:20:17.700 "ffdhe3072", 00:20:17.700 "ffdhe4096", 00:20:17.700 "ffdhe6144", 00:20:17.700 "ffdhe8192" 00:20:17.700 ] 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "bdev_nvme_attach_controller", 00:20:17.700 "params": { 00:20:17.700 "name": "TLSTEST", 00:20:17.700 "trtype": "TCP", 00:20:17.700 "adrfam": "IPv4", 00:20:17.700 "traddr": "10.0.0.2", 00:20:17.700 "trsvcid": "4420", 00:20:17.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.700 "prchk_reftag": false, 00:20:17.700 "prchk_guard": false, 00:20:17.700 "ctrlr_loss_timeout_sec": 0, 00:20:17.700 "reconnect_delay_sec": 0, 00:20:17.700 "fast_io_fail_timeout_sec": 0, 00:20:17.700 "psk": "key0", 00:20:17.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.700 "hdgst": false, 00:20:17.700 "ddgst": false, 00:20:17.700 "multipath": "multipath" 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": 
"bdev_nvme_set_hotplug", 00:20:17.700 "params": { 00:20:17.700 "period_us": 100000, 00:20:17.700 "enable": false 00:20:17.700 } 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "method": "bdev_wait_for_examine" 00:20:17.700 } 00:20:17.700 ] 00:20:17.700 }, 00:20:17.700 { 00:20:17.700 "subsystem": "nbd", 00:20:17.700 "config": [] 00:20:17.700 } 00:20:17.700 ] 00:20:17.700 }' 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 412268 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 412268 ']' 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 412268 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412268 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412268' 00:20:17.700 killing process with pid 412268 00:20:17.700 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 412268 00:20:17.700 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.700 00:20:17.700 Latency(us) 00:20:17.700 [2024-11-15T10:00:37.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.700 [2024-11-15T10:00:37.227Z] =================================================================================================================== 00:20:17.700 [2024-11-15T10:00:37.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 412268 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 411883 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 411883 ']' 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 411883 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.701 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411883 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411883' 00:20:17.963 killing process with pid 411883 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 411883 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 411883 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.963 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:17.963 "subsystems": [ 00:20:17.963 { 00:20:17.963 "subsystem": "keyring", 00:20:17.963 "config": [ 00:20:17.963 { 00:20:17.963 "method": "keyring_file_add_key", 00:20:17.963 "params": { 00:20:17.963 "name": "key0", 00:20:17.963 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:17.963 } 00:20:17.963 } 00:20:17.963 ] 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "subsystem": "iobuf", 00:20:17.963 "config": [ 00:20:17.963 { 00:20:17.963 "method": "iobuf_set_options", 00:20:17.963 "params": { 00:20:17.963 "small_pool_count": 8192, 00:20:17.963 "large_pool_count": 1024, 00:20:17.963 "small_bufsize": 8192, 00:20:17.963 "large_bufsize": 135168, 00:20:17.963 "enable_numa": false 00:20:17.963 } 00:20:17.963 } 00:20:17.963 ] 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "subsystem": "sock", 00:20:17.963 "config": [ 00:20:17.963 { 00:20:17.963 "method": "sock_set_default_impl", 00:20:17.963 "params": { 00:20:17.963 "impl_name": "posix" 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "sock_impl_set_options", 00:20:17.963 "params": { 00:20:17.963 "impl_name": "ssl", 00:20:17.963 "recv_buf_size": 4096, 00:20:17.963 "send_buf_size": 4096, 00:20:17.963 "enable_recv_pipe": true, 00:20:17.963 "enable_quickack": false, 00:20:17.963 "enable_placement_id": 0, 00:20:17.963 "enable_zerocopy_send_server": true, 00:20:17.963 "enable_zerocopy_send_client": false, 00:20:17.963 "zerocopy_threshold": 0, 00:20:17.963 "tls_version": 0, 00:20:17.963 "enable_ktls": false 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "sock_impl_set_options", 00:20:17.963 "params": { 00:20:17.963 "impl_name": "posix", 00:20:17.963 "recv_buf_size": 2097152, 00:20:17.963 "send_buf_size": 2097152, 00:20:17.963 "enable_recv_pipe": true, 00:20:17.963 "enable_quickack": false, 00:20:17.963 "enable_placement_id": 0, 00:20:17.963 "enable_zerocopy_send_server": true, 00:20:17.963 "enable_zerocopy_send_client": false, 00:20:17.963 "zerocopy_threshold": 0, 00:20:17.963 "tls_version": 0, 00:20:17.963 "enable_ktls": false 00:20:17.963 } 00:20:17.963 } 00:20:17.963 ] 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "subsystem": "vmd", 00:20:17.963 "config": [] 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "subsystem": "accel", 00:20:17.963 "config": [ 00:20:17.963 { 00:20:17.963 "method": "accel_set_options", 00:20:17.963 "params": { 00:20:17.963 "small_cache_size": 128, 00:20:17.963 "large_cache_size": 16, 00:20:17.963 "task_count": 2048, 00:20:17.963 "sequence_count": 2048, 00:20:17.963 "buf_count": 2048 00:20:17.963 } 00:20:17.963 } 00:20:17.963 ] 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "subsystem": "bdev", 00:20:17.963 "config": [ 00:20:17.963 { 00:20:17.963 "method": "bdev_set_options", 00:20:17.963 "params": { 00:20:17.963 "bdev_io_pool_size": 65535, 00:20:17.963 "bdev_io_cache_size": 256, 00:20:17.963 "bdev_auto_examine": true, 00:20:17.963 "iobuf_small_cache_size": 128, 00:20:17.963 "iobuf_large_cache_size": 16 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "bdev_raid_set_options", 00:20:17.963 "params": { 00:20:17.963 
"process_window_size_kb": 1024, 00:20:17.963 "process_max_bandwidth_mb_sec": 0 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "bdev_iscsi_set_options", 00:20:17.963 "params": { 00:20:17.963 "timeout_sec": 30 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "bdev_nvme_set_options", 00:20:17.963 "params": { 00:20:17.963 "action_on_timeout": "none", 00:20:17.963 "timeout_us": 0, 00:20:17.963 "timeout_admin_us": 0, 00:20:17.963 "keep_alive_timeout_ms": 10000, 00:20:17.963 "arbitration_burst": 0, 00:20:17.963 "low_priority_weight": 0, 00:20:17.963 "medium_priority_weight": 0, 00:20:17.963 "high_priority_weight": 0, 00:20:17.963 "nvme_adminq_poll_period_us": 10000, 00:20:17.963 "nvme_ioq_poll_period_us": 0, 00:20:17.963 "io_queue_requests": 0, 00:20:17.963 "delay_cmd_submit": true, 00:20:17.963 "transport_retry_count": 4, 00:20:17.963 "bdev_retry_count": 3, 00:20:17.963 "transport_ack_timeout": 0, 00:20:17.963 "ctrlr_loss_timeout_sec": 0, 00:20:17.963 "reconnect_delay_sec": 0, 00:20:17.963 "fast_io_fail_timeout_sec": 0, 00:20:17.963 "disable_auto_failback": false, 00:20:17.963 "generate_uuids": false, 00:20:17.963 "transport_tos": 0, 00:20:17.963 "nvme_error_stat": false, 00:20:17.963 "rdma_srq_size": 0, 00:20:17.963 "io_path_stat": false, 00:20:17.963 "allow_accel_sequence": false, 00:20:17.963 "rdma_max_cq_size": 0, 00:20:17.963 "rdma_cm_event_timeout_ms": 0, 00:20:17.963 "dhchap_digests": [ 00:20:17.963 "sha256", 00:20:17.963 "sha384", 00:20:17.963 "sha512" 00:20:17.963 ], 00:20:17.963 "dhchap_dhgroups": [ 00:20:17.963 "null", 00:20:17.963 "ffdhe2048", 00:20:17.963 "ffdhe3072", 00:20:17.963 "ffdhe4096", 00:20:17.963 "ffdhe6144", 00:20:17.963 "ffdhe8192" 00:20:17.963 ] 00:20:17.963 } 00:20:17.963 }, 00:20:17.963 { 00:20:17.963 "method": "bdev_nvme_set_hotplug", 00:20:17.964 "params": { 00:20:17.964 "period_us": 100000, 00:20:17.964 "enable": false 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "bdev_malloc_create", 00:20:17.964 "params": { 00:20:17.964 "name": "malloc0", 00:20:17.964 "num_blocks": 8192, 00:20:17.964 "block_size": 4096, 00:20:17.964 "physical_block_size": 4096, 00:20:17.964 "uuid": "3e869eb4-bdfc-4615-b204-1a74309688b0", 00:20:17.964 "optimal_io_boundary": 0, 00:20:17.964 "md_size": 0, 00:20:17.964 "dif_type": 0, 00:20:17.964 "dif_is_head_of_md": false, 00:20:17.964 "dif_pi_format": 0 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "bdev_wait_for_examine" 00:20:17.964 } 00:20:17.964 ] 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "subsystem": "nbd", 00:20:17.964 "config": [] 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "subsystem": "scheduler", 00:20:17.964 "config": [ 00:20:17.964 { 00:20:17.964 "method": "framework_set_scheduler", 00:20:17.964 "params": { 00:20:17.964 "name": "static" 00:20:17.964 } 00:20:17.964 } 00:20:17.964 ] 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "subsystem": "nvmf", 00:20:17.964 "config": [ 00:20:17.964 { 00:20:17.964 "method": "nvmf_set_config", 00:20:17.964 "params": { 00:20:17.964 "discovery_filter": "match_any", 00:20:17.964 "admin_cmd_passthru": { 00:20:17.964 "identify_ctrlr": false 00:20:17.964 }, 00:20:17.964 "dhchap_digests": [ 00:20:17.964 "sha256", 00:20:17.964 "sha384", 00:20:17.964 "sha512" 00:20:17.964 ], 00:20:17.964 "dhchap_dhgroups": [ 00:20:17.964 "null", 00:20:17.964 "ffdhe2048", 00:20:17.964 "ffdhe3072", 00:20:17.964 "ffdhe4096", 00:20:17.964 "ffdhe6144", 00:20:17.964 "ffdhe8192" 00:20:17.964 ] 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 
00:20:17.964 "method": "nvmf_set_max_subsystems", 00:20:17.964 "params": { 00:20:17.964 "max_subsystems": 1024 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_set_crdt", 00:20:17.964 "params": { 00:20:17.964 "crdt1": 0, 00:20:17.964 "crdt2": 0, 00:20:17.964 "crdt3": 0 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_create_transport", 00:20:17.964 "params": { 00:20:17.964 "trtype": "TCP", 00:20:17.964 "max_queue_depth": 128, 00:20:17.964 "max_io_qpairs_per_ctrlr": 127, 00:20:17.964 "in_capsule_data_size": 4096, 00:20:17.964 "max_io_size": 131072, 00:20:17.964 "io_unit_size": 131072, 00:20:17.964 "max_aq_depth": 128, 00:20:17.964 "num_shared_buffers": 511, 00:20:17.964 "buf_cache_size": 4294967295, 00:20:17.964 "dif_insert_or_strip": false, 00:20:17.964 "zcopy": false, 00:20:17.964 "c2h_success": false, 00:20:17.964 "sock_priority": 0, 00:20:17.964 "abort_timeout_sec": 1, 00:20:17.964 "ack_timeout": 0, 00:20:17.964 "data_wr_pool_size": 0 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_create_subsystem", 00:20:17.964 "params": { 00:20:17.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.964 "allow_any_host": false, 00:20:17.964 "serial_number": "SPDK00000000000001", 00:20:17.964 "model_number": "SPDK bdev Controller", 00:20:17.964 "max_namespaces": 10, 00:20:17.964 "min_cntlid": 1, 00:20:17.964 "max_cntlid": 65519, 00:20:17.964 "ana_reporting": false 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_subsystem_add_host", 00:20:17.964 "params": { 00:20:17.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.964 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.964 "psk": "key0" 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_subsystem_add_ns", 00:20:17.964 "params": { 00:20:17.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.964 "namespace": { 00:20:17.964 "nsid": 1, 00:20:17.964 "bdev_name": "malloc0", 00:20:17.964 "nguid": "3E869EB4BDFC4615B2041A74309688B0", 00:20:17.964 "uuid": "3e869eb4-bdfc-4615-b204-1a74309688b0", 00:20:17.964 "no_auto_visible": false 00:20:17.964 } 00:20:17.964 } 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "method": "nvmf_subsystem_add_listener", 00:20:17.964 "params": { 00:20:17.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.964 "listen_address": { 00:20:17.964 "trtype": "TCP", 00:20:17.964 "adrfam": "IPv4", 00:20:17.964 "traddr": "10.0.0.2", 00:20:17.964 "trsvcid": "4420" 00:20:17.964 }, 00:20:17.964 "secure_channel": true 00:20:17.964 } 00:20:17.964 } 00:20:17.964 ] 00:20:17.964 } 00:20:17.964 ] 00:20:17.964 }' 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=412823 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 412823 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 412823 ']' 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:17.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:17.964 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.965 [2024-11-15 11:00:37.447307] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:17.965 [2024-11-15 11:00:37.447363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.225 [2024-11-15 11:00:37.535545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.225 [2024-11-15 11:00:37.564160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.225 [2024-11-15 11:00:37.564188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.225 [2024-11-15 11:00:37.564194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.225 [2024-11-15 11:00:37.564198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.225 [2024-11-15 11:00:37.564203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.225 [2024-11-15 11:00:37.564720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.485 [2024-11-15 11:00:37.758637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.485 [2024-11-15 11:00:37.790667] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.485 [2024-11-15 11:00:37.790873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.745 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:18.745 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:18.745 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.745 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:18.745 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=412955 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 412955 /var/tmp/bdevperf.sock 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 412955 ']' 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
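Annotation: target/tls.sh@205 above relaunches the target with '-c /dev/fd/62', echoing the JSON captured earlier by save_config into a process-substitution file descriptor, so the new nvmf_tgt starts with the transport, subsystem, key, and secure-channel listener already in place instead of rebuilding them RPC by RPC. A sketch of that pattern, assuming bash (the /dev/fd/NN path seen in the log is what <(...) expands to):

  tgtconf=$(scripts/rpc.py save_config)                # dump the live config as JSON
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &    # replay it into a fresh target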
00:20:19.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:19.006 "subsystems": [ 00:20:19.006 { 00:20:19.006 "subsystem": "keyring", 00:20:19.007 "config": [ 00:20:19.007 { 00:20:19.007 "method": "keyring_file_add_key", 00:20:19.007 "params": { 00:20:19.007 "name": "key0", 00:20:19.007 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:19.007 } 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "iobuf", 00:20:19.007 "config": [ 00:20:19.007 { 00:20:19.007 "method": "iobuf_set_options", 00:20:19.007 "params": { 00:20:19.007 "small_pool_count": 8192, 00:20:19.007 "large_pool_count": 1024, 00:20:19.007 "small_bufsize": 8192, 00:20:19.007 "large_bufsize": 135168, 00:20:19.007 "enable_numa": false 00:20:19.007 } 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "sock", 00:20:19.007 "config": [ 00:20:19.007 { 00:20:19.007 "method": "sock_set_default_impl", 00:20:19.007 "params": { 00:20:19.007 "impl_name": "posix" 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "sock_impl_set_options", 00:20:19.007 "params": { 00:20:19.007 "impl_name": "ssl", 00:20:19.007 "recv_buf_size": 4096, 00:20:19.007 "send_buf_size": 4096, 00:20:19.007 "enable_recv_pipe": true, 00:20:19.007 "enable_quickack": false, 00:20:19.007 "enable_placement_id": 0, 00:20:19.007 "enable_zerocopy_send_server": true, 00:20:19.007 "enable_zerocopy_send_client": false, 00:20:19.007 "zerocopy_threshold": 0, 00:20:19.007 "tls_version": 0, 00:20:19.007 "enable_ktls": false 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "sock_impl_set_options", 00:20:19.007 "params": { 00:20:19.007 "impl_name": "posix", 00:20:19.007 "recv_buf_size": 2097152, 00:20:19.007 "send_buf_size": 2097152, 00:20:19.007 "enable_recv_pipe": true, 00:20:19.007 "enable_quickack": false, 00:20:19.007 "enable_placement_id": 0, 00:20:19.007 "enable_zerocopy_send_server": true, 00:20:19.007 "enable_zerocopy_send_client": false, 00:20:19.007 "zerocopy_threshold": 0, 00:20:19.007 "tls_version": 0, 00:20:19.007 "enable_ktls": false 00:20:19.007 } 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "vmd", 00:20:19.007 "config": [] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "accel", 00:20:19.007 "config": [ 00:20:19.007 { 00:20:19.007 "method": "accel_set_options", 00:20:19.007 "params": { 00:20:19.007 "small_cache_size": 128, 00:20:19.007 "large_cache_size": 16, 00:20:19.007 "task_count": 2048, 00:20:19.007 "sequence_count": 2048, 00:20:19.007 "buf_count": 2048 00:20:19.007 } 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "bdev", 00:20:19.007 "config": [ 00:20:19.007 { 00:20:19.007 "method": "bdev_set_options", 00:20:19.007 "params": { 00:20:19.007 "bdev_io_pool_size": 65535, 00:20:19.007 "bdev_io_cache_size": 256, 00:20:19.007 "bdev_auto_examine": true, 00:20:19.007 "iobuf_small_cache_size": 128, 
00:20:19.007 "iobuf_large_cache_size": 16 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_raid_set_options", 00:20:19.007 "params": { 00:20:19.007 "process_window_size_kb": 1024, 00:20:19.007 "process_max_bandwidth_mb_sec": 0 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_iscsi_set_options", 00:20:19.007 "params": { 00:20:19.007 "timeout_sec": 30 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_nvme_set_options", 00:20:19.007 "params": { 00:20:19.007 "action_on_timeout": "none", 00:20:19.007 "timeout_us": 0, 00:20:19.007 "timeout_admin_us": 0, 00:20:19.007 "keep_alive_timeout_ms": 10000, 00:20:19.007 "arbitration_burst": 0, 00:20:19.007 "low_priority_weight": 0, 00:20:19.007 "medium_priority_weight": 0, 00:20:19.007 "high_priority_weight": 0, 00:20:19.007 "nvme_adminq_poll_period_us": 10000, 00:20:19.007 "nvme_ioq_poll_period_us": 0, 00:20:19.007 "io_queue_requests": 512, 00:20:19.007 "delay_cmd_submit": true, 00:20:19.007 "transport_retry_count": 4, 00:20:19.007 "bdev_retry_count": 3, 00:20:19.007 "transport_ack_timeout": 0, 00:20:19.007 "ctrlr_loss_timeout_sec": 0, 00:20:19.007 "reconnect_delay_sec": 0, 00:20:19.007 "fast_io_fail_timeout_sec": 0, 00:20:19.007 "disable_auto_failback": false, 00:20:19.007 "generate_uuids": false, 00:20:19.007 "transport_tos": 0, 00:20:19.007 "nvme_error_stat": false, 00:20:19.007 "rdma_srq_size": 0, 00:20:19.007 "io_path_stat": false, 00:20:19.007 "allow_accel_sequence": false, 00:20:19.007 "rdma_max_cq_size": 0, 00:20:19.007 "rdma_cm_event_timeout_ms": 0, 00:20:19.007 "dhchap_digests": [ 00:20:19.007 "sha256", 00:20:19.007 "sha384", 00:20:19.007 "sha512" 00:20:19.007 ], 00:20:19.007 "dhchap_dhgroups": [ 00:20:19.007 "null", 00:20:19.007 "ffdhe2048", 00:20:19.007 "ffdhe3072", 00:20:19.007 "ffdhe4096", 00:20:19.007 "ffdhe6144", 00:20:19.007 "ffdhe8192" 00:20:19.007 ] 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_nvme_attach_controller", 00:20:19.007 "params": { 00:20:19.007 "name": "TLSTEST", 00:20:19.007 "trtype": "TCP", 00:20:19.007 "adrfam": "IPv4", 00:20:19.007 "traddr": "10.0.0.2", 00:20:19.007 "trsvcid": "4420", 00:20:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.007 "prchk_reftag": false, 00:20:19.007 "prchk_guard": false, 00:20:19.007 "ctrlr_loss_timeout_sec": 0, 00:20:19.007 "reconnect_delay_sec": 0, 00:20:19.007 "fast_io_fail_timeout_sec": 0, 00:20:19.007 "psk": "key0", 00:20:19.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.007 "hdgst": false, 00:20:19.007 "ddgst": false, 00:20:19.007 "multipath": "multipath" 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_nvme_set_hotplug", 00:20:19.007 "params": { 00:20:19.007 "period_us": 100000, 00:20:19.007 "enable": false 00:20:19.007 } 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "method": "bdev_wait_for_examine" 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }, 00:20:19.007 { 00:20:19.007 "subsystem": "nbd", 00:20:19.007 "config": [] 00:20:19.007 } 00:20:19.007 ] 00:20:19.007 }' 00:20:19.007 [2024-11-15 11:00:38.359395] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:20:19.007 [2024-11-15 11:00:38.359447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412955 ] 00:20:19.007 [2024-11-15 11:00:38.443741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.008 [2024-11-15 11:00:38.472574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.269 [2024-11-15 11:00:38.607638] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.839 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:19.839 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:19.840 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.840 Running I/O for 10 seconds... 00:20:21.737 5523.00 IOPS, 21.57 MiB/s [2024-11-15T10:00:42.645Z] 5143.00 IOPS, 20.09 MiB/s [2024-11-15T10:00:43.583Z] 4815.33 IOPS, 18.81 MiB/s [2024-11-15T10:00:44.521Z] 4991.75 IOPS, 19.50 MiB/s [2024-11-15T10:00:45.462Z] 5095.60 IOPS, 19.90 MiB/s [2024-11-15T10:00:46.402Z] 5199.67 IOPS, 20.31 MiB/s [2024-11-15T10:00:47.341Z] 5235.86 IOPS, 20.45 MiB/s [2024-11-15T10:00:48.282Z] 5285.00 IOPS, 20.64 MiB/s [2024-11-15T10:00:49.664Z] 5295.22 IOPS, 20.68 MiB/s [2024-11-15T10:00:49.664Z] 5288.00 IOPS, 20.66 MiB/s 00:20:30.137 Latency(us) 00:20:30.137 [2024-11-15T10:00:49.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.137 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.137 Verification LBA range: start 0x0 length 0x2000 00:20:30.137 TLSTESTn1 : 10.02 5288.52 20.66 0.00 0.00 24162.03 5215.57 45875.20 00:20:30.137 [2024-11-15T10:00:49.664Z] =================================================================================================================== 00:20:30.137 [2024-11-15T10:00:49.664Z] Total : 5288.52 20.66 0.00 0.00 24162.03 5215.57 45875.20 00:20:30.137 { 00:20:30.137 "results": [ 00:20:30.137 { 00:20:30.137 "job": "TLSTESTn1", 00:20:30.137 "core_mask": "0x4", 00:20:30.137 "workload": "verify", 00:20:30.137 "status": "finished", 00:20:30.137 "verify_range": { 00:20:30.137 "start": 0, 00:20:30.137 "length": 8192 00:20:30.137 }, 00:20:30.137 "queue_depth": 128, 00:20:30.138 "io_size": 4096, 00:20:30.138 "runtime": 10.023024, 00:20:30.138 "iops": 5288.523703026152, 00:20:30.138 "mibps": 20.658295714945908, 00:20:30.138 "io_failed": 0, 00:20:30.138 "io_timeout": 0, 00:20:30.138 "avg_latency_us": 24162.034858792238, 00:20:30.138 "min_latency_us": 5215.573333333334, 00:20:30.138 "max_latency_us": 45875.2 00:20:30.138 } 00:20:30.138 ], 00:20:30.138 "core_count": 1 00:20:30.138 } 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 412955 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 412955 ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 412955 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412955 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412955' 00:20:30.138 killing process with pid 412955 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 412955 00:20:30.138 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.138 00:20:30.138 Latency(us) 00:20:30.138 [2024-11-15T10:00:49.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.138 [2024-11-15T10:00:49.665Z] =================================================================================================================== 00:20:30.138 [2024-11-15T10:00:49.665Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 412955 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 412823 ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412823' 00:20:30.138 killing process with pid 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 412823 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=415265 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 415265 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:30.138 11:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 415265 ']' 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.138 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.398 [2024-11-15 11:00:49.712265] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:30.398 [2024-11-15 11:00:49.712322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.398 [2024-11-15 11:00:49.810502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.398 [2024-11-15 11:00:49.854532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.398 [2024-11-15 11:00:49.854598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.398 [2024-11-15 11:00:49.854607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.399 [2024-11-15 11:00:49.854614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.399 [2024-11-15 11:00:49.854620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
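Annotation: this final target instance is launched with '-e 0xFFFF', enabling every tracepoint group, which is why the startup banner above advertises the tracing hook. As the notices themselves state, a snapshot can be taken while the target runs, or the shared-memory trace file can be copied for offline analysis:

  spdk_trace -s nvmf -i 0            # live snapshot, as suggested by app_setup_trace
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the trace file for post-mortem inspection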
00:20:30.399 [2024-11-15 11:00:49.855379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.0uhU7HRmUL 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0uhU7HRmUL 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.341 [2024-11-15 11:00:50.739036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.341 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.601 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.862 [2024-11-15 11:00:51.140056] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.862 [2024-11-15 11:00:51.140416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.862 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.862 malloc0 00:20:31.862 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.123 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:32.384 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=415665 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 415665 /var/tmp/bdevperf.sock 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 415665 ']' 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:32.645 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.645 [2024-11-15 11:00:52.030377] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:32.645 [2024-11-15 11:00:52.030453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415665 ] 00:20:32.645 [2024-11-15 11:00:52.117191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.645 [2024-11-15 11:00:52.150932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.905 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:32.905 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:32.905 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:32.905 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:33.165 [2024-11-15 11:00:52.561131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.165 nvme0n1 00:20:33.165 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.426 Running I/O for 1 seconds... 
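All of the TLS-specific setup in this test reduces to the same PSK file being registered on both ends of the connection. The target added /tmp/tmp.0uhU7HRmUL to its keyring as key0, bound it to the host with nvmf_subsystem_add_host --psk key0, and opened a TLS-capable listener with -k; the bdevperf initiator then registered the identical file under the same key name before attaching. Condensed from the trace above, with the absolute script paths shortened to rpc.py:

  # target side (default RPC socket): TLS listener plus a PSK-gated host
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side (bdevperf's RPC socket): same key name, same key file
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The "TLS support is considered experimental" notices from tcp.c and bdev_nvme_rpc.c are expected output on this path, not failures.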
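The one-second run that follows prints both a human-readable summary table and a JSON results block, and the two can be cross-checked: at the 4 KiB I/O size used here, throughput should satisfy MiB/s = IOPS / 256, and the completed I/O count should be roughly IOPS x runtime. A quick check of the first run's figures (values copied from the JSON below):

  # 4595.14 IOPS over 1.027172 s at 4 KiB each -> ~4720 I/Os, ~17.95 MiB/s
  awk 'BEGIN { iops = 4595.140833278166; rt = 1.027172;
               printf "ios=%.0f mibps=%.2f\n", iops * rt, iops * 4096 / 1048576 }'

This matches the reported 17.95 MiB/s, so the summary line and the JSON results agree.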
00:20:34.370 4592.00 IOPS, 17.94 MiB/s 00:20:34.370 Latency(us) 00:20:34.370 [2024-11-15T10:00:53.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.370 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:34.370 Verification LBA range: start 0x0 length 0x2000 00:20:34.370 nvme0n1 : 1.03 4595.14 17.95 0.00 0.00 27619.74 6444.37 67283.63 00:20:34.370 [2024-11-15T10:00:53.897Z] =================================================================================================================== 00:20:34.370 [2024-11-15T10:00:53.897Z] Total : 4595.14 17.95 0.00 0.00 27619.74 6444.37 67283.63 00:20:34.370 { 00:20:34.370 "results": [ 00:20:34.370 { 00:20:34.370 "job": "nvme0n1", 00:20:34.370 "core_mask": "0x2", 00:20:34.370 "workload": "verify", 00:20:34.370 "status": "finished", 00:20:34.370 "verify_range": { 00:20:34.370 "start": 0, 00:20:34.370 "length": 8192 00:20:34.370 }, 00:20:34.370 "queue_depth": 128, 00:20:34.370 "io_size": 4096, 00:20:34.370 "runtime": 1.027172, 00:20:34.370 "iops": 4595.140833278166, 00:20:34.370 "mibps": 17.949768879992835, 00:20:34.370 "io_failed": 0, 00:20:34.370 "io_timeout": 0, 00:20:34.370 "avg_latency_us": 27619.744542372882, 00:20:34.370 "min_latency_us": 6444.373333333333, 00:20:34.370 "max_latency_us": 67283.62666666666 00:20:34.370 } 00:20:34.370 ], 00:20:34.370 "core_count": 1 00:20:34.370 } 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 415665 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 415665 ']' 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 415665 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 415665 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 415665' 00:20:34.370 killing process with pid 415665 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 415665 00:20:34.370 Received shutdown signal, test time was about 1.000000 seconds 00:20:34.370 00:20:34.370 Latency(us) 00:20:34.370 [2024-11-15T10:00:53.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.370 [2024-11-15T10:00:53.897Z] =================================================================================================================== 00:20:34.370 [2024-11-15T10:00:53.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.370 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 415665 00:20:34.632 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 415265 00:20:34.632 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 415265 ']' 00:20:34.632 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 415265 00:20:34.632 11:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.632 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.632 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 415265 00:20:34.632 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:34.632 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:34.632 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 415265' 00:20:34.632 killing process with pid 415265 00:20:34.632 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 415265 00:20:34.632 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 415265 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=416019 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 416019 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 416019 ']' 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.894 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.894 [2024-11-15 11:00:54.224007] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:34.894 [2024-11-15 11:00:54.224064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.894 [2024-11-15 11:00:54.317064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.894 [2024-11-15 11:00:54.345807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.894 [2024-11-15 11:00:54.345840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.894 [2024-11-15 11:00:54.345845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.894 [2024-11-15 11:00:54.345850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.894 [2024-11-15 11:00:54.345854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.894 [2024-11-15 11:00:54.346349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.835 [2024-11-15 11:00:55.067175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.835 malloc0 00:20:35.835 [2024-11-15 11:00:55.093119] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.835 [2024-11-15 11:00:55.093324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=416362 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 416362 /var/tmp/bdevperf.sock 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 416362 ']' 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.835 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.835 [2024-11-15 11:00:55.171551] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:20:35.835 [2024-11-15 11:00:55.171602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416362 ] 00:20:35.835 [2024-11-15 11:00:55.252987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.835 [2024-11-15 11:00:55.282657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.778 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.778 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:36.778 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0uhU7HRmUL 00:20:36.778 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:36.778 [2024-11-15 11:00:56.262983] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.039 nvme0n1 00:20:37.039 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.039 Running I/O for 1 seconds... 00:20:37.982 5113.00 IOPS, 19.97 MiB/s 00:20:37.982 Latency(us) 00:20:37.982 [2024-11-15T10:00:57.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.982 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.982 Verification LBA range: start 0x0 length 0x2000 00:20:37.982 nvme0n1 : 1.04 5031.82 19.66 0.00 0.00 24989.68 5625.17 37573.97 00:20:37.982 [2024-11-15T10:00:57.509Z] =================================================================================================================== 00:20:37.982 [2024-11-15T10:00:57.509Z] Total : 5031.82 19.66 0.00 0.00 24989.68 5625.17 37573.97 00:20:37.982 { 00:20:37.982 "results": [ 00:20:37.982 { 00:20:37.982 "job": "nvme0n1", 00:20:37.982 "core_mask": "0x2", 00:20:37.982 "workload": "verify", 00:20:37.982 "status": "finished", 00:20:37.982 "verify_range": { 00:20:37.982 "start": 0, 00:20:37.982 "length": 8192 00:20:37.982 }, 00:20:37.982 "queue_depth": 128, 00:20:37.982 "io_size": 4096, 00:20:37.982 "runtime": 1.04177, 00:20:37.982 "iops": 5031.820843372338, 00:20:37.982 "mibps": 19.655550169423194, 00:20:37.982 "io_failed": 0, 00:20:37.982 "io_timeout": 0, 00:20:37.982 "avg_latency_us": 24989.678809614652, 00:20:37.982 "min_latency_us": 5625.173333333333, 00:20:37.982 "max_latency_us": 37573.973333333335 00:20:37.982 } 00:20:37.982 ], 00:20:37.982 "core_count": 1 00:20:37.982 } 00:20:38.243 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:38.243 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.243 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.243 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.243 11:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:38.243 "subsystems": [ 00:20:38.243 { 00:20:38.243 "subsystem": "keyring", 00:20:38.243 "config": [ 00:20:38.243 { 00:20:38.243 "method": "keyring_file_add_key", 00:20:38.243 "params": { 00:20:38.243 "name": "key0", 00:20:38.243 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:38.243 } 00:20:38.243 } 00:20:38.243 ] 00:20:38.243 }, 00:20:38.243 { 00:20:38.243 "subsystem": "iobuf", 00:20:38.243 "config": [ 00:20:38.243 { 00:20:38.243 "method": "iobuf_set_options", 00:20:38.243 "params": { 00:20:38.243 "small_pool_count": 8192, 00:20:38.243 "large_pool_count": 1024, 00:20:38.243 "small_bufsize": 8192, 00:20:38.243 "large_bufsize": 135168, 00:20:38.243 "enable_numa": false 00:20:38.243 } 00:20:38.243 } 00:20:38.243 ] 00:20:38.243 }, 00:20:38.244 { 00:20:38.244 "subsystem": "sock", 00:20:38.244 "config": [ 00:20:38.244 { 00:20:38.244 "method": "sock_set_default_impl", 00:20:38.244 "params": { 00:20:38.244 "impl_name": "posix" 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "sock_impl_set_options", 00:20:38.244 "params": { 00:20:38.244 "impl_name": "ssl", 00:20:38.244 "recv_buf_size": 4096, 00:20:38.244 "send_buf_size": 4096, 00:20:38.244 "enable_recv_pipe": true, 00:20:38.244 "enable_quickack": false, 00:20:38.244 "enable_placement_id": 0, 00:20:38.244 "enable_zerocopy_send_server": true, 00:20:38.244 "enable_zerocopy_send_client": false, 00:20:38.244 "zerocopy_threshold": 0, 00:20:38.244 "tls_version": 0, 00:20:38.244 "enable_ktls": false 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "sock_impl_set_options", 00:20:38.244 "params": { 00:20:38.244 "impl_name": "posix", 00:20:38.244 "recv_buf_size": 2097152, 00:20:38.244 "send_buf_size": 2097152, 00:20:38.244 "enable_recv_pipe": true, 00:20:38.244 "enable_quickack": false, 00:20:38.244 "enable_placement_id": 0, 00:20:38.244 "enable_zerocopy_send_server": true, 00:20:38.244 "enable_zerocopy_send_client": false, 00:20:38.244 "zerocopy_threshold": 0, 00:20:38.244 "tls_version": 0, 00:20:38.244 "enable_ktls": false 00:20:38.244 } 00:20:38.244 } 00:20:38.244 ] 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "subsystem": "vmd", 00:20:38.244 "config": [] 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "subsystem": "accel", 00:20:38.244 "config": [ 00:20:38.244 { 00:20:38.244 "method": "accel_set_options", 00:20:38.244 "params": { 00:20:38.244 "small_cache_size": 128, 00:20:38.244 "large_cache_size": 16, 00:20:38.244 "task_count": 2048, 00:20:38.244 "sequence_count": 2048, 00:20:38.244 "buf_count": 2048 00:20:38.244 } 00:20:38.244 } 00:20:38.244 ] 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "subsystem": "bdev", 00:20:38.244 "config": [ 00:20:38.244 { 00:20:38.244 "method": "bdev_set_options", 00:20:38.244 "params": { 00:20:38.244 "bdev_io_pool_size": 65535, 00:20:38.244 "bdev_io_cache_size": 256, 00:20:38.244 "bdev_auto_examine": true, 00:20:38.244 "iobuf_small_cache_size": 128, 00:20:38.244 "iobuf_large_cache_size": 16 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_raid_set_options", 00:20:38.244 "params": { 00:20:38.244 "process_window_size_kb": 1024, 00:20:38.244 "process_max_bandwidth_mb_sec": 0 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_iscsi_set_options", 00:20:38.244 "params": { 00:20:38.244 "timeout_sec": 30 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_nvme_set_options", 00:20:38.244 "params": { 00:20:38.244 "action_on_timeout": "none", 00:20:38.244 
"timeout_us": 0, 00:20:38.244 "timeout_admin_us": 0, 00:20:38.244 "keep_alive_timeout_ms": 10000, 00:20:38.244 "arbitration_burst": 0, 00:20:38.244 "low_priority_weight": 0, 00:20:38.244 "medium_priority_weight": 0, 00:20:38.244 "high_priority_weight": 0, 00:20:38.244 "nvme_adminq_poll_period_us": 10000, 00:20:38.244 "nvme_ioq_poll_period_us": 0, 00:20:38.244 "io_queue_requests": 0, 00:20:38.244 "delay_cmd_submit": true, 00:20:38.244 "transport_retry_count": 4, 00:20:38.244 "bdev_retry_count": 3, 00:20:38.244 "transport_ack_timeout": 0, 00:20:38.244 "ctrlr_loss_timeout_sec": 0, 00:20:38.244 "reconnect_delay_sec": 0, 00:20:38.244 "fast_io_fail_timeout_sec": 0, 00:20:38.244 "disable_auto_failback": false, 00:20:38.244 "generate_uuids": false, 00:20:38.244 "transport_tos": 0, 00:20:38.244 "nvme_error_stat": false, 00:20:38.244 "rdma_srq_size": 0, 00:20:38.244 "io_path_stat": false, 00:20:38.244 "allow_accel_sequence": false, 00:20:38.244 "rdma_max_cq_size": 0, 00:20:38.244 "rdma_cm_event_timeout_ms": 0, 00:20:38.244 "dhchap_digests": [ 00:20:38.244 "sha256", 00:20:38.244 "sha384", 00:20:38.244 "sha512" 00:20:38.244 ], 00:20:38.244 "dhchap_dhgroups": [ 00:20:38.244 "null", 00:20:38.244 "ffdhe2048", 00:20:38.244 "ffdhe3072", 00:20:38.244 "ffdhe4096", 00:20:38.244 "ffdhe6144", 00:20:38.244 "ffdhe8192" 00:20:38.244 ] 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_nvme_set_hotplug", 00:20:38.244 "params": { 00:20:38.244 "period_us": 100000, 00:20:38.244 "enable": false 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_malloc_create", 00:20:38.244 "params": { 00:20:38.244 "name": "malloc0", 00:20:38.244 "num_blocks": 8192, 00:20:38.244 "block_size": 4096, 00:20:38.244 "physical_block_size": 4096, 00:20:38.244 "uuid": "b2e23d1f-3548-4d00-b28b-dd93cf193a23", 00:20:38.244 "optimal_io_boundary": 0, 00:20:38.244 "md_size": 0, 00:20:38.244 "dif_type": 0, 00:20:38.244 "dif_is_head_of_md": false, 00:20:38.244 "dif_pi_format": 0 00:20:38.244 } 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "method": "bdev_wait_for_examine" 00:20:38.244 } 00:20:38.244 ] 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "subsystem": "nbd", 00:20:38.244 "config": [] 00:20:38.244 }, 00:20:38.244 { 00:20:38.244 "subsystem": "scheduler", 00:20:38.244 "config": [ 00:20:38.244 { 00:20:38.245 "method": "framework_set_scheduler", 00:20:38.245 "params": { 00:20:38.245 "name": "static" 00:20:38.245 } 00:20:38.245 } 00:20:38.245 ] 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "subsystem": "nvmf", 00:20:38.245 "config": [ 00:20:38.245 { 00:20:38.245 "method": "nvmf_set_config", 00:20:38.245 "params": { 00:20:38.245 "discovery_filter": "match_any", 00:20:38.245 "admin_cmd_passthru": { 00:20:38.245 "identify_ctrlr": false 00:20:38.245 }, 00:20:38.245 "dhchap_digests": [ 00:20:38.245 "sha256", 00:20:38.245 "sha384", 00:20:38.245 "sha512" 00:20:38.245 ], 00:20:38.245 "dhchap_dhgroups": [ 00:20:38.245 "null", 00:20:38.245 "ffdhe2048", 00:20:38.245 "ffdhe3072", 00:20:38.245 "ffdhe4096", 00:20:38.245 "ffdhe6144", 00:20:38.245 "ffdhe8192" 00:20:38.245 ] 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_set_max_subsystems", 00:20:38.245 "params": { 00:20:38.245 "max_subsystems": 1024 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_set_crdt", 00:20:38.245 "params": { 00:20:38.245 "crdt1": 0, 00:20:38.245 "crdt2": 0, 00:20:38.245 "crdt3": 0 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_create_transport", 00:20:38.245 "params": 
{ 00:20:38.245 "trtype": "TCP", 00:20:38.245 "max_queue_depth": 128, 00:20:38.245 "max_io_qpairs_per_ctrlr": 127, 00:20:38.245 "in_capsule_data_size": 4096, 00:20:38.245 "max_io_size": 131072, 00:20:38.245 "io_unit_size": 131072, 00:20:38.245 "max_aq_depth": 128, 00:20:38.245 "num_shared_buffers": 511, 00:20:38.245 "buf_cache_size": 4294967295, 00:20:38.245 "dif_insert_or_strip": false, 00:20:38.245 "zcopy": false, 00:20:38.245 "c2h_success": false, 00:20:38.245 "sock_priority": 0, 00:20:38.245 "abort_timeout_sec": 1, 00:20:38.245 "ack_timeout": 0, 00:20:38.245 "data_wr_pool_size": 0 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_create_subsystem", 00:20:38.245 "params": { 00:20:38.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.245 "allow_any_host": false, 00:20:38.245 "serial_number": "00000000000000000000", 00:20:38.245 "model_number": "SPDK bdev Controller", 00:20:38.245 "max_namespaces": 32, 00:20:38.245 "min_cntlid": 1, 00:20:38.245 "max_cntlid": 65519, 00:20:38.245 "ana_reporting": false 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_subsystem_add_host", 00:20:38.245 "params": { 00:20:38.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.245 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.245 "psk": "key0" 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_subsystem_add_ns", 00:20:38.245 "params": { 00:20:38.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.245 "namespace": { 00:20:38.245 "nsid": 1, 00:20:38.245 "bdev_name": "malloc0", 00:20:38.245 "nguid": "B2E23D1F35484D00B28BDD93CF193A23", 00:20:38.245 "uuid": "b2e23d1f-3548-4d00-b28b-dd93cf193a23", 00:20:38.245 "no_auto_visible": false 00:20:38.245 } 00:20:38.245 } 00:20:38.245 }, 00:20:38.245 { 00:20:38.245 "method": "nvmf_subsystem_add_listener", 00:20:38.245 "params": { 00:20:38.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.245 "listen_address": { 00:20:38.245 "trtype": "TCP", 00:20:38.245 "adrfam": "IPv4", 00:20:38.245 "traddr": "10.0.0.2", 00:20:38.245 "trsvcid": "4420" 00:20:38.245 }, 00:20:38.245 "secure_channel": false, 00:20:38.245 "sock_impl": "ssl" 00:20:38.245 } 00:20:38.245 } 00:20:38.245 ] 00:20:38.245 } 00:20:38.245 ] 00:20:38.245 }' 00:20:38.245 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:38.507 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:38.507 "subsystems": [ 00:20:38.507 { 00:20:38.507 "subsystem": "keyring", 00:20:38.507 "config": [ 00:20:38.507 { 00:20:38.507 "method": "keyring_file_add_key", 00:20:38.507 "params": { 00:20:38.507 "name": "key0", 00:20:38.507 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:38.507 } 00:20:38.507 } 00:20:38.507 ] 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "subsystem": "iobuf", 00:20:38.507 "config": [ 00:20:38.507 { 00:20:38.507 "method": "iobuf_set_options", 00:20:38.507 "params": { 00:20:38.507 "small_pool_count": 8192, 00:20:38.507 "large_pool_count": 1024, 00:20:38.507 "small_bufsize": 8192, 00:20:38.507 "large_bufsize": 135168, 00:20:38.507 "enable_numa": false 00:20:38.507 } 00:20:38.507 } 00:20:38.507 ] 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "subsystem": "sock", 00:20:38.507 "config": [ 00:20:38.507 { 00:20:38.507 "method": "sock_set_default_impl", 00:20:38.507 "params": { 00:20:38.507 "impl_name": "posix" 00:20:38.507 } 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "method": "sock_impl_set_options", 00:20:38.507 
"params": { 00:20:38.507 "impl_name": "ssl", 00:20:38.507 "recv_buf_size": 4096, 00:20:38.507 "send_buf_size": 4096, 00:20:38.507 "enable_recv_pipe": true, 00:20:38.507 "enable_quickack": false, 00:20:38.507 "enable_placement_id": 0, 00:20:38.507 "enable_zerocopy_send_server": true, 00:20:38.507 "enable_zerocopy_send_client": false, 00:20:38.507 "zerocopy_threshold": 0, 00:20:38.507 "tls_version": 0, 00:20:38.507 "enable_ktls": false 00:20:38.507 } 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "method": "sock_impl_set_options", 00:20:38.507 "params": { 00:20:38.507 "impl_name": "posix", 00:20:38.507 "recv_buf_size": 2097152, 00:20:38.507 "send_buf_size": 2097152, 00:20:38.507 "enable_recv_pipe": true, 00:20:38.507 "enable_quickack": false, 00:20:38.507 "enable_placement_id": 0, 00:20:38.507 "enable_zerocopy_send_server": true, 00:20:38.507 "enable_zerocopy_send_client": false, 00:20:38.507 "zerocopy_threshold": 0, 00:20:38.507 "tls_version": 0, 00:20:38.507 "enable_ktls": false 00:20:38.507 } 00:20:38.507 } 00:20:38.507 ] 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "subsystem": "vmd", 00:20:38.507 "config": [] 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "subsystem": "accel", 00:20:38.507 "config": [ 00:20:38.507 { 00:20:38.507 "method": "accel_set_options", 00:20:38.507 "params": { 00:20:38.507 "small_cache_size": 128, 00:20:38.507 "large_cache_size": 16, 00:20:38.507 "task_count": 2048, 00:20:38.507 "sequence_count": 2048, 00:20:38.507 "buf_count": 2048 00:20:38.507 } 00:20:38.507 } 00:20:38.507 ] 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "subsystem": "bdev", 00:20:38.507 "config": [ 00:20:38.507 { 00:20:38.507 "method": "bdev_set_options", 00:20:38.507 "params": { 00:20:38.507 "bdev_io_pool_size": 65535, 00:20:38.507 "bdev_io_cache_size": 256, 00:20:38.507 "bdev_auto_examine": true, 00:20:38.507 "iobuf_small_cache_size": 128, 00:20:38.507 "iobuf_large_cache_size": 16 00:20:38.507 } 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "method": "bdev_raid_set_options", 00:20:38.507 "params": { 00:20:38.507 "process_window_size_kb": 1024, 00:20:38.507 "process_max_bandwidth_mb_sec": 0 00:20:38.507 } 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "method": "bdev_iscsi_set_options", 00:20:38.507 "params": { 00:20:38.507 "timeout_sec": 30 00:20:38.507 } 00:20:38.507 }, 00:20:38.507 { 00:20:38.507 "method": "bdev_nvme_set_options", 00:20:38.507 "params": { 00:20:38.507 "action_on_timeout": "none", 00:20:38.507 "timeout_us": 0, 00:20:38.507 "timeout_admin_us": 0, 00:20:38.507 "keep_alive_timeout_ms": 10000, 00:20:38.507 "arbitration_burst": 0, 00:20:38.507 "low_priority_weight": 0, 00:20:38.507 "medium_priority_weight": 0, 00:20:38.507 "high_priority_weight": 0, 00:20:38.507 "nvme_adminq_poll_period_us": 10000, 00:20:38.507 "nvme_ioq_poll_period_us": 0, 00:20:38.507 "io_queue_requests": 512, 00:20:38.507 "delay_cmd_submit": true, 00:20:38.507 "transport_retry_count": 4, 00:20:38.507 "bdev_retry_count": 3, 00:20:38.507 "transport_ack_timeout": 0, 00:20:38.508 "ctrlr_loss_timeout_sec": 0, 00:20:38.508 "reconnect_delay_sec": 0, 00:20:38.508 "fast_io_fail_timeout_sec": 0, 00:20:38.508 "disable_auto_failback": false, 00:20:38.508 "generate_uuids": false, 00:20:38.508 "transport_tos": 0, 00:20:38.508 "nvme_error_stat": false, 00:20:38.508 "rdma_srq_size": 0, 00:20:38.508 "io_path_stat": false, 00:20:38.508 "allow_accel_sequence": false, 00:20:38.508 "rdma_max_cq_size": 0, 00:20:38.508 "rdma_cm_event_timeout_ms": 0, 00:20:38.508 "dhchap_digests": [ 00:20:38.508 "sha256", 00:20:38.508 "sha384", 00:20:38.508 
"sha512" 00:20:38.508 ], 00:20:38.508 "dhchap_dhgroups": [ 00:20:38.508 "null", 00:20:38.508 "ffdhe2048", 00:20:38.508 "ffdhe3072", 00:20:38.508 "ffdhe4096", 00:20:38.508 "ffdhe6144", 00:20:38.508 "ffdhe8192" 00:20:38.508 ] 00:20:38.508 } 00:20:38.508 }, 00:20:38.508 { 00:20:38.508 "method": "bdev_nvme_attach_controller", 00:20:38.508 "params": { 00:20:38.508 "name": "nvme0", 00:20:38.508 "trtype": "TCP", 00:20:38.508 "adrfam": "IPv4", 00:20:38.508 "traddr": "10.0.0.2", 00:20:38.508 "trsvcid": "4420", 00:20:38.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.508 "prchk_reftag": false, 00:20:38.508 "prchk_guard": false, 00:20:38.508 "ctrlr_loss_timeout_sec": 0, 00:20:38.508 "reconnect_delay_sec": 0, 00:20:38.508 "fast_io_fail_timeout_sec": 0, 00:20:38.508 "psk": "key0", 00:20:38.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.508 "hdgst": false, 00:20:38.508 "ddgst": false, 00:20:38.508 "multipath": "multipath" 00:20:38.508 } 00:20:38.508 }, 00:20:38.508 { 00:20:38.508 "method": "bdev_nvme_set_hotplug", 00:20:38.508 "params": { 00:20:38.508 "period_us": 100000, 00:20:38.508 "enable": false 00:20:38.508 } 00:20:38.508 }, 00:20:38.508 { 00:20:38.508 "method": "bdev_enable_histogram", 00:20:38.508 "params": { 00:20:38.508 "name": "nvme0n1", 00:20:38.508 "enable": true 00:20:38.508 } 00:20:38.508 }, 00:20:38.508 { 00:20:38.508 "method": "bdev_wait_for_examine" 00:20:38.508 } 00:20:38.508 ] 00:20:38.508 }, 00:20:38.508 { 00:20:38.508 "subsystem": "nbd", 00:20:38.508 "config": [] 00:20:38.508 } 00:20:38.508 ] 00:20:38.508 }' 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 416362 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 416362 ']' 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 416362 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 416362 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 416362' 00:20:38.508 killing process with pid 416362 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 416362 00:20:38.508 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.508 00:20:38.508 Latency(us) 00:20:38.508 [2024-11-15T10:00:58.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.508 [2024-11-15T10:00:58.035Z] =================================================================================================================== 00:20:38.508 [2024-11-15T10:00:58.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.508 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 416362 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 416019 ']' 
00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 416019' 00:20:38.769 killing process with pid 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 416019 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.769 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:38.769 "subsystems": [ 00:20:38.769 { 00:20:38.769 "subsystem": "keyring", 00:20:38.769 "config": [ 00:20:38.769 { 00:20:38.769 "method": "keyring_file_add_key", 00:20:38.769 "params": { 00:20:38.769 "name": "key0", 00:20:38.769 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:38.769 } 00:20:38.769 } 00:20:38.769 ] 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "subsystem": "iobuf", 00:20:38.769 "config": [ 00:20:38.769 { 00:20:38.769 "method": "iobuf_set_options", 00:20:38.769 "params": { 00:20:38.769 "small_pool_count": 8192, 00:20:38.769 "large_pool_count": 1024, 00:20:38.769 "small_bufsize": 8192, 00:20:38.769 "large_bufsize": 135168, 00:20:38.769 "enable_numa": false 00:20:38.769 } 00:20:38.769 } 00:20:38.769 ] 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "subsystem": "sock", 00:20:38.769 "config": [ 00:20:38.769 { 00:20:38.769 "method": "sock_set_default_impl", 00:20:38.769 "params": { 00:20:38.769 "impl_name": "posix" 00:20:38.769 } 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "method": "sock_impl_set_options", 00:20:38.769 "params": { 00:20:38.769 "impl_name": "ssl", 00:20:38.769 "recv_buf_size": 4096, 00:20:38.769 "send_buf_size": 4096, 00:20:38.769 "enable_recv_pipe": true, 00:20:38.769 "enable_quickack": false, 00:20:38.769 "enable_placement_id": 0, 00:20:38.769 "enable_zerocopy_send_server": true, 00:20:38.769 "enable_zerocopy_send_client": false, 00:20:38.769 "zerocopy_threshold": 0, 00:20:38.769 "tls_version": 0, 00:20:38.769 "enable_ktls": false 00:20:38.769 } 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "method": "sock_impl_set_options", 00:20:38.769 "params": { 00:20:38.769 "impl_name": "posix", 00:20:38.769 "recv_buf_size": 2097152, 00:20:38.769 "send_buf_size": 2097152, 00:20:38.769 "enable_recv_pipe": true, 00:20:38.769 "enable_quickack": false, 00:20:38.769 "enable_placement_id": 0, 00:20:38.769 "enable_zerocopy_send_server": true, 00:20:38.769 "enable_zerocopy_send_client": false, 
00:20:38.769 "zerocopy_threshold": 0, 00:20:38.769 "tls_version": 0, 00:20:38.769 "enable_ktls": false 00:20:38.769 } 00:20:38.769 } 00:20:38.769 ] 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "subsystem": "vmd", 00:20:38.769 "config": [] 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "subsystem": "accel", 00:20:38.769 "config": [ 00:20:38.769 { 00:20:38.769 "method": "accel_set_options", 00:20:38.769 "params": { 00:20:38.769 "small_cache_size": 128, 00:20:38.769 "large_cache_size": 16, 00:20:38.769 "task_count": 2048, 00:20:38.769 "sequence_count": 2048, 00:20:38.769 "buf_count": 2048 00:20:38.769 } 00:20:38.769 } 00:20:38.769 ] 00:20:38.769 }, 00:20:38.769 { 00:20:38.769 "subsystem": "bdev", 00:20:38.769 "config": [ 00:20:38.770 { 00:20:38.770 "method": "bdev_set_options", 00:20:38.770 "params": { 00:20:38.770 "bdev_io_pool_size": 65535, 00:20:38.770 "bdev_io_cache_size": 256, 00:20:38.770 "bdev_auto_examine": true, 00:20:38.770 "iobuf_small_cache_size": 128, 00:20:38.770 "iobuf_large_cache_size": 16 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_raid_set_options", 00:20:38.770 "params": { 00:20:38.770 "process_window_size_kb": 1024, 00:20:38.770 "process_max_bandwidth_mb_sec": 0 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_iscsi_set_options", 00:20:38.770 "params": { 00:20:38.770 "timeout_sec": 30 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_nvme_set_options", 00:20:38.770 "params": { 00:20:38.770 "action_on_timeout": "none", 00:20:38.770 "timeout_us": 0, 00:20:38.770 "timeout_admin_us": 0, 00:20:38.770 "keep_alive_timeout_ms": 10000, 00:20:38.770 "arbitration_burst": 0, 00:20:38.770 "low_priority_weight": 0, 00:20:38.770 "medium_priority_weight": 0, 00:20:38.770 "high_priority_weight": 0, 00:20:38.770 "nvme_adminq_poll_period_us": 10000, 00:20:38.770 "nvme_ioq_poll_period_us": 0, 00:20:38.770 "io_queue_requests": 0, 00:20:38.770 "delay_cmd_submit": true, 00:20:38.770 "transport_retry_count": 4, 00:20:38.770 "bdev_retry_count": 3, 00:20:38.770 "transport_ack_timeout": 0, 00:20:38.770 "ctrlr_loss_timeout_sec": 0, 00:20:38.770 "reconnect_delay_sec": 0, 00:20:38.770 "fast_io_fail_timeout_sec": 0, 00:20:38.770 "disable_auto_failback": false, 00:20:38.770 "generate_uuids": false, 00:20:38.770 "transport_tos": 0, 00:20:38.770 "nvme_error_stat": false, 00:20:38.770 "rdma_srq_size": 0, 00:20:38.770 "io_path_stat": false, 00:20:38.770 "allow_accel_sequence": false, 00:20:38.770 "rdma_max_cq_size": 0, 00:20:38.770 "rdma_cm_event_timeout_ms": 0, 00:20:38.770 "dhchap_digests": [ 00:20:38.770 "sha256", 00:20:38.770 "sha384", 00:20:38.770 "sha512" 00:20:38.770 ], 00:20:38.770 "dhchap_dhgroups": [ 00:20:38.770 "null", 00:20:38.770 "ffdhe2048", 00:20:38.770 "ffdhe3072", 00:20:38.770 "ffdhe4096", 00:20:38.770 "ffdhe6144", 00:20:38.770 "ffdhe8192" 00:20:38.770 ] 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_nvme_set_hotplug", 00:20:38.770 "params": { 00:20:38.770 "period_us": 100000, 00:20:38.770 "enable": false 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_malloc_create", 00:20:38.770 "params": { 00:20:38.770 "name": "malloc0", 00:20:38.770 "num_blocks": 8192, 00:20:38.770 "block_size": 4096, 00:20:38.770 "physical_block_size": 4096, 00:20:38.770 "uuid": "b2e23d1f-3548-4d00-b28b-dd93cf193a23", 00:20:38.770 "optimal_io_boundary": 0, 00:20:38.770 "md_size": 0, 00:20:38.770 "dif_type": 0, 00:20:38.770 "dif_is_head_of_md": false, 00:20:38.770 "dif_pi_format": 0 
00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "bdev_wait_for_examine" 00:20:38.770 } 00:20:38.770 ] 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "subsystem": "nbd", 00:20:38.770 "config": [] 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "subsystem": "scheduler", 00:20:38.770 "config": [ 00:20:38.770 { 00:20:38.770 "method": "framework_set_scheduler", 00:20:38.770 "params": { 00:20:38.770 "name": "static" 00:20:38.770 } 00:20:38.770 } 00:20:38.770 ] 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "subsystem": "nvmf", 00:20:38.770 "config": [ 00:20:38.770 { 00:20:38.770 "method": "nvmf_set_config", 00:20:38.770 "params": { 00:20:38.770 "discovery_filter": "match_any", 00:20:38.770 "admin_cmd_passthru": { 00:20:38.770 "identify_ctrlr": false 00:20:38.770 }, 00:20:38.770 "dhchap_digests": [ 00:20:38.770 "sha256", 00:20:38.770 "sha384", 00:20:38.770 "sha512" 00:20:38.770 ], 00:20:38.770 "dhchap_dhgroups": [ 00:20:38.770 "null", 00:20:38.770 "ffdhe2048", 00:20:38.770 "ffdhe3072", 00:20:38.770 "ffdhe4096", 00:20:38.770 "ffdhe6144", 00:20:38.770 "ffdhe8192" 00:20:38.770 ] 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_set_max_subsystems", 00:20:38.770 "params": { 00:20:38.770 "max_subsystems": 1024 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_set_crdt", 00:20:38.770 "params": { 00:20:38.770 "crdt1": 0, 00:20:38.770 "crdt2": 0, 00:20:38.770 "crdt3": 0 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_create_transport", 00:20:38.770 "params": { 00:20:38.770 "trtype": "TCP", 00:20:38.770 "max_queue_depth": 128, 00:20:38.770 "max_io_qpairs_per_ctrlr": 127, 00:20:38.770 "in_capsule_data_size": 4096, 00:20:38.770 "max_io_size": 131072, 00:20:38.770 "io_unit_size": 131072, 00:20:38.770 "max_aq_depth": 128, 00:20:38.770 "num_shared_buffers": 511, 00:20:38.770 "buf_cache_size": 4294967295, 00:20:38.770 "dif_insert_or_strip": false, 00:20:38.770 "zcopy": false, 00:20:38.770 "c2h_success": false, 00:20:38.770 "sock_priority": 0, 00:20:38.770 "abort_timeout_sec": 1, 00:20:38.770 "ack_timeout": 0, 00:20:38.770 "data_wr_pool_size": 0 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_create_subsystem", 00:20:38.770 "params": { 00:20:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.770 "allow_any_host": false, 00:20:38.770 "serial_number": "00000000000000000000", 00:20:38.770 "model_number": "SPDK bdev Controller", 00:20:38.770 "max_namespaces": 32, 00:20:38.770 "min_cntlid": 1, 00:20:38.770 "max_cntlid": 65519, 00:20:38.770 "ana_reporting": false 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_subsystem_add_host", 00:20:38.770 "params": { 00:20:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.770 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.770 "psk": "key0" 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_subsystem_add_ns", 00:20:38.770 "params": { 00:20:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.770 "namespace": { 00:20:38.770 "nsid": 1, 00:20:38.770 "bdev_name": "malloc0", 00:20:38.770 "nguid": "B2E23D1F35484D00B28BDD93CF193A23", 00:20:38.770 "uuid": "b2e23d1f-3548-4d00-b28b-dd93cf193a23", 00:20:38.770 "no_auto_visible": false 00:20:38.770 } 00:20:38.770 } 00:20:38.770 }, 00:20:38.770 { 00:20:38.770 "method": "nvmf_subsystem_add_listener", 00:20:38.770 "params": { 00:20:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.770 "listen_address": { 00:20:38.770 "trtype": "TCP", 00:20:38.770 "adrfam": "IPv4", 
00:20:38.770 "traddr": "10.0.0.2", 00:20:38.770 "trsvcid": "4420" 00:20:38.770 }, 00:20:38.770 "secure_channel": false, 00:20:38.770 "sock_impl": "ssl" 00:20:38.770 } 00:20:38.770 } 00:20:38.770 ] 00:20:38.770 } 00:20:38.770 ] 00:20:38.770 }' 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=417004 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 417004 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 417004 ']' 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.770 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.770 [2024-11-15 11:00:58.283427] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:38.770 [2024-11-15 11:00:58.283479] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.030 [2024-11-15 11:00:58.371460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.030 [2024-11-15 11:00:58.400665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.030 [2024-11-15 11:00:58.400695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.030 [2024-11-15 11:00:58.400700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.030 [2024-11-15 11:00:58.400705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.030 [2024-11-15 11:00:58.400709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.030 [2024-11-15 11:00:58.401212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.290 [2024-11-15 11:00:58.595284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.290 [2024-11-15 11:00:58.627316] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.290 [2024-11-15 11:00:58.627528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.551 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.551 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:39.551 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.551 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.551 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=417091 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 417091 /var/tmp/bdevperf.sock 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 417091 ']' 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
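The bdevperf invocation below uses the same config-replay trick: -c /dev/fd/63 carries the JSON saved from the previous bdevperf instance, keyring entry and attached controller included. The -z flag holds bdevperf idle with only its -r RPC socket live, which is why the workload is kicked off separately with perform_tests once the controller is verified; roughly (relative paths again standing in for the absolute ones in the trace):

  # start bdevperf idle (-z) on its own RPC socket (-r), replaying a saved config
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c config.json &
  # confirm the controller came back from the replayed config, then run the workload
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests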
00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.813 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:39.813 "subsystems": [ 00:20:39.813 { 00:20:39.813 "subsystem": "keyring", 00:20:39.813 "config": [ 00:20:39.813 { 00:20:39.813 "method": "keyring_file_add_key", 00:20:39.813 "params": { 00:20:39.813 "name": "key0", 00:20:39.813 "path": "/tmp/tmp.0uhU7HRmUL" 00:20:39.813 } 00:20:39.813 } 00:20:39.813 ] 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "subsystem": "iobuf", 00:20:39.813 "config": [ 00:20:39.813 { 00:20:39.813 "method": "iobuf_set_options", 00:20:39.813 "params": { 00:20:39.813 "small_pool_count": 8192, 00:20:39.813 "large_pool_count": 1024, 00:20:39.813 "small_bufsize": 8192, 00:20:39.813 "large_bufsize": 135168, 00:20:39.813 "enable_numa": false 00:20:39.813 } 00:20:39.813 } 00:20:39.813 ] 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "subsystem": "sock", 00:20:39.813 "config": [ 00:20:39.813 { 00:20:39.813 "method": "sock_set_default_impl", 00:20:39.813 "params": { 00:20:39.813 "impl_name": "posix" 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "sock_impl_set_options", 00:20:39.813 "params": { 00:20:39.813 "impl_name": "ssl", 00:20:39.813 "recv_buf_size": 4096, 00:20:39.813 "send_buf_size": 4096, 00:20:39.813 "enable_recv_pipe": true, 00:20:39.813 "enable_quickack": false, 00:20:39.813 "enable_placement_id": 0, 00:20:39.813 "enable_zerocopy_send_server": true, 00:20:39.813 "enable_zerocopy_send_client": false, 00:20:39.813 "zerocopy_threshold": 0, 00:20:39.813 "tls_version": 0, 00:20:39.813 "enable_ktls": false 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "sock_impl_set_options", 00:20:39.813 "params": { 00:20:39.813 "impl_name": "posix", 00:20:39.813 "recv_buf_size": 2097152, 00:20:39.813 "send_buf_size": 2097152, 00:20:39.813 "enable_recv_pipe": true, 00:20:39.813 "enable_quickack": false, 00:20:39.813 "enable_placement_id": 0, 00:20:39.813 "enable_zerocopy_send_server": true, 00:20:39.813 "enable_zerocopy_send_client": false, 00:20:39.813 "zerocopy_threshold": 0, 00:20:39.813 "tls_version": 0, 00:20:39.813 "enable_ktls": false 00:20:39.813 } 00:20:39.813 } 00:20:39.813 ] 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "subsystem": "vmd", 00:20:39.813 "config": [] 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "subsystem": "accel", 00:20:39.813 "config": [ 00:20:39.813 { 00:20:39.813 "method": "accel_set_options", 00:20:39.813 "params": { 00:20:39.813 "small_cache_size": 128, 00:20:39.813 "large_cache_size": 16, 00:20:39.813 "task_count": 2048, 00:20:39.813 "sequence_count": 2048, 00:20:39.813 "buf_count": 2048 00:20:39.813 } 00:20:39.813 } 00:20:39.813 ] 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "subsystem": "bdev", 00:20:39.813 "config": [ 00:20:39.813 { 00:20:39.813 "method": "bdev_set_options", 00:20:39.813 "params": { 00:20:39.813 "bdev_io_pool_size": 65535, 00:20:39.813 "bdev_io_cache_size": 256, 00:20:39.813 "bdev_auto_examine": true, 00:20:39.813 "iobuf_small_cache_size": 128, 00:20:39.813 "iobuf_large_cache_size": 16 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": 
"bdev_raid_set_options", 00:20:39.813 "params": { 00:20:39.813 "process_window_size_kb": 1024, 00:20:39.813 "process_max_bandwidth_mb_sec": 0 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_iscsi_set_options", 00:20:39.813 "params": { 00:20:39.813 "timeout_sec": 30 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_nvme_set_options", 00:20:39.813 "params": { 00:20:39.813 "action_on_timeout": "none", 00:20:39.813 "timeout_us": 0, 00:20:39.813 "timeout_admin_us": 0, 00:20:39.813 "keep_alive_timeout_ms": 10000, 00:20:39.813 "arbitration_burst": 0, 00:20:39.813 "low_priority_weight": 0, 00:20:39.813 "medium_priority_weight": 0, 00:20:39.813 "high_priority_weight": 0, 00:20:39.813 "nvme_adminq_poll_period_us": 10000, 00:20:39.813 "nvme_ioq_poll_period_us": 0, 00:20:39.813 "io_queue_requests": 512, 00:20:39.813 "delay_cmd_submit": true, 00:20:39.813 "transport_retry_count": 4, 00:20:39.813 "bdev_retry_count": 3, 00:20:39.813 "transport_ack_timeout": 0, 00:20:39.813 "ctrlr_loss_timeout_sec": 0, 00:20:39.813 "reconnect_delay_sec": 0, 00:20:39.813 "fast_io_fail_timeout_sec": 0, 00:20:39.813 "disable_auto_failback": false, 00:20:39.813 "generate_uuids": false, 00:20:39.813 "transport_tos": 0, 00:20:39.813 "nvme_error_stat": false, 00:20:39.813 "rdma_srq_size": 0, 00:20:39.813 "io_path_stat": false, 00:20:39.813 "allow_accel_sequence": false, 00:20:39.813 "rdma_max_cq_size": 0, 00:20:39.813 "rdma_cm_event_timeout_ms": 0, 00:20:39.813 "dhchap_digests": [ 00:20:39.813 "sha256", 00:20:39.813 "sha384", 00:20:39.813 "sha512" 00:20:39.813 ], 00:20:39.813 "dhchap_dhgroups": [ 00:20:39.813 "null", 00:20:39.813 "ffdhe2048", 00:20:39.813 "ffdhe3072", 00:20:39.813 "ffdhe4096", 00:20:39.813 "ffdhe6144", 00:20:39.813 "ffdhe8192" 00:20:39.813 ] 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_nvme_attach_controller", 00:20:39.813 "params": { 00:20:39.813 "name": "nvme0", 00:20:39.813 "trtype": "TCP", 00:20:39.813 "adrfam": "IPv4", 00:20:39.813 "traddr": "10.0.0.2", 00:20:39.813 "trsvcid": "4420", 00:20:39.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.813 "prchk_reftag": false, 00:20:39.813 "prchk_guard": false, 00:20:39.813 "ctrlr_loss_timeout_sec": 0, 00:20:39.813 "reconnect_delay_sec": 0, 00:20:39.813 "fast_io_fail_timeout_sec": 0, 00:20:39.813 "psk": "key0", 00:20:39.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.813 "hdgst": false, 00:20:39.813 "ddgst": false, 00:20:39.813 "multipath": "multipath" 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_nvme_set_hotplug", 00:20:39.813 "params": { 00:20:39.813 "period_us": 100000, 00:20:39.813 "enable": false 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_enable_histogram", 00:20:39.813 "params": { 00:20:39.813 "name": "nvme0n1", 00:20:39.813 "enable": true 00:20:39.813 } 00:20:39.813 }, 00:20:39.813 { 00:20:39.813 "method": "bdev_wait_for_examine" 00:20:39.813 } 00:20:39.813 ] 00:20:39.814 }, 00:20:39.814 { 00:20:39.814 "subsystem": "nbd", 00:20:39.814 "config": [] 00:20:39.814 } 00:20:39.814 ] 00:20:39.814 }' 00:20:39.814 [2024-11-15 11:00:59.164926] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:20:39.814 [2024-11-15 11:00:59.164979] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417091 ] 00:20:39.814 [2024-11-15 11:00:59.250806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.814 [2024-11-15 11:00:59.280608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.074 [2024-11-15 11:00:59.416495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.645 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.645 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:40.645 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:40.645 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:40.645 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.645 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.905 Running I/O for 1 seconds... 00:20:41.848 5506.00 IOPS, 21.51 MiB/s 00:20:41.848 Latency(us) 00:20:41.848 [2024-11-15T10:01:01.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.848 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.848 Verification LBA range: start 0x0 length 0x2000 00:20:41.848 nvme0n1 : 1.01 5559.49 21.72 0.00 0.00 22871.47 5188.27 34078.72 00:20:41.848 [2024-11-15T10:01:01.375Z] =================================================================================================================== 00:20:41.848 [2024-11-15T10:01:01.375Z] Total : 5559.49 21.72 0.00 0.00 22871.47 5188.27 34078.72 00:20:41.848 { 00:20:41.848 "results": [ 00:20:41.848 { 00:20:41.848 "job": "nvme0n1", 00:20:41.848 "core_mask": "0x2", 00:20:41.848 "workload": "verify", 00:20:41.848 "status": "finished", 00:20:41.848 "verify_range": { 00:20:41.848 "start": 0, 00:20:41.848 "length": 8192 00:20:41.848 }, 00:20:41.848 "queue_depth": 128, 00:20:41.848 "io_size": 4096, 00:20:41.848 "runtime": 1.013402, 00:20:41.848 "iops": 5559.491692339269, 00:20:41.848 "mibps": 21.71676442320027, 00:20:41.848 "io_failed": 0, 00:20:41.848 "io_timeout": 0, 00:20:41.848 "avg_latency_us": 22871.468332741686, 00:20:41.848 "min_latency_us": 5188.266666666666, 00:20:41.848 "max_latency_us": 34078.72 00:20:41.848 } 00:20:41.848 ], 00:20:41.848 "core_count": 1 00:20:41.848 } 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
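
Because bdevperf was launched with -z it idles until driven over its RPC socket, so the harness first confirms that the TLS-protected controller actually attached and only then starts the timed run. The sequence reduces to the calls below, a minimal sketch assuming $SPDK_DIR points at the SPDK checkout; the socket path matches the trace.

SOCK=/var/tmp/bdevperf.sock
# the attach would already have failed here if the PSK handshake had failed
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# kick off the verify workload configured at start-up and wait for results
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
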
00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:41.849 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:41.849 nvmf_trace.0 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 417091 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 417091 ']' 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 417091 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 417091 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:42.111 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 417091' 00:20:42.112 killing process with pid 417091 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 417091 00:20:42.112 Received shutdown signal, test time was about 1.000000 seconds 00:20:42.112 00:20:42.112 Latency(us) 00:20:42.112 [2024-11-15T10:01:01.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.112 [2024-11-15T10:01:01.639Z] =================================================================================================================== 00:20:42.112 [2024-11-15T10:01:01.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 417091 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:42.112 rmmod nvme_tcp 00:20:42.112 rmmod nvme_fabrics 00:20:42.112 rmmod nvme_keyring 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:42.112 11:01:01 
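
The nvmf_trace.0 lines above are process_shm at work: SPDK leaves its trace buffer in /dev/shm, and the harness archives it before killing the processes so the run can be inspected offline with spdk_trace. A minimal equivalent, with $OUT standing in for the job's output directory:

for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm -czf "$OUT/${f}_shm.tar.gz" "$f"   # e.g. nvmf_trace.0
done
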
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 417004 ']' 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 417004 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 417004 ']' 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 417004 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:42.112 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 417004 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 417004' 00:20:42.373 killing process with pid 417004 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 417004 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 417004 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.373 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.JygbgDzP2B /tmp/tmp.c1vVv9EpDv /tmp/tmp.0uhU7HRmUL 00:20:44.920 00:20:44.920 real 1m28.183s 00:20:44.920 user 2m19.461s 00:20:44.920 sys 0m27.263s 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.920 ************************************ 00:20:44.920 END TEST nvmf_tls 00:20:44.920 
************************************ 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.920 ************************************ 00:20:44.920 START TEST nvmf_fips 00:20:44.920 ************************************ 00:20:44.920 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:44.920 * Looking for test storage... 00:20:44.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.920 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:44.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.921 --rc genhtml_branch_coverage=1 00:20:44.921 --rc genhtml_function_coverage=1 00:20:44.921 --rc genhtml_legend=1 00:20:44.921 --rc geninfo_all_blocks=1 00:20:44.921 --rc geninfo_unexecuted_blocks=1 00:20:44.921 00:20:44.921 ' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:44.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.921 --rc genhtml_branch_coverage=1 00:20:44.921 --rc genhtml_function_coverage=1 00:20:44.921 --rc genhtml_legend=1 00:20:44.921 --rc geninfo_all_blocks=1 00:20:44.921 --rc geninfo_unexecuted_blocks=1 00:20:44.921 00:20:44.921 ' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:44.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.921 --rc genhtml_branch_coverage=1 00:20:44.921 --rc genhtml_function_coverage=1 00:20:44.921 --rc genhtml_legend=1 00:20:44.921 --rc geninfo_all_blocks=1 00:20:44.921 --rc geninfo_unexecuted_blocks=1 00:20:44.921 00:20:44.921 ' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:44.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.921 --rc genhtml_branch_coverage=1 00:20:44.921 --rc genhtml_function_coverage=1 00:20:44.921 --rc genhtml_legend=1 00:20:44.921 --rc geninfo_all_blocks=1 00:20:44.921 --rc geninfo_unexecuted_blocks=1 00:20:44.921 00:20:44.921 ' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.921 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:44.921 11:01:04 
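
The "[: : integer expression expected" message above is benign: line 33 of nvmf/common.sh applies a numeric test to a variable that is empty in this environment, and test(1) rejects the empty string as a number. The failing shape, plus the usual defensive spelling (illustrative only, not a patch to common.sh):

[ '' -eq 1 ]             # reproduces: [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ]   # defaulting the empty value avoids the complaint
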
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:44.922 Error setting digest 00:20:44.922 4072228B827F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:44.922 4072228B827F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.922 
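
Before sending any TLS traffic, fips.sh proves that the OpenSSL build really enforces FIPS: the provider module must exist on disk, "openssl list -providers" must report a fips provider, and a non-approved digest has to be rejected — the "Error setting digest" above is that expected MD5 failure, which the NOT wrapper inverts into a pass. Condensed sketch; the module path is the RHEL-style layout from this log, and the harness additionally points OPENSSL_CONF at a generated spdk_fips.conf:

[ -f /usr/lib64/ossl-modules/fips.so ] || exit 1
openssl list -providers | grep -qi fips || exit 1
if echo test | openssl md5 >/dev/null 2>&1; then   # MD5 must fail under FIPS
    echo "FIPS mode is not being enforced" >&2
    exit 1
fi
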
11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.922 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.067 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:53.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:53.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.067 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.067 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:53.068 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:53.068 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.068 11:01:11 
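
NIC discovery here is driven purely by PCI IDs: vendor/device 0x8086/0x159b matches the e810 list, and the kernel interface names are then read from sysfs, which is where cvl_0_0 and cvl_0_1 come from. For one of the functions found in this log:

ls /sys/bus/pci/devices/0000:4b:00.0/net/   # prints cvl_0_0 on this machine
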
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:20:53.068 00:20:53.068 --- 10.0.0.2 ping statistics --- 00:20:53.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.068 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:20:53.068 00:20:53.068 --- 10.0.0.1 ping statistics --- 00:20:53.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.068 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=421815 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 421815 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 421815 ']' 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.068 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.068 [2024-11-15 11:01:11.987379] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
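
The topology behind those two pings: one E810 port is moved into a private network namespace to act as the target side at 10.0.0.2, the second port stays in the host namespace as the initiator at 10.0.0.1, and an iptables rule opens the NVMe/TCP listener port. A condensed replay of the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host
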
00:20:53.068 [2024-11-15 11:01:11.987452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.068 [2024-11-15 11:01:12.086140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.068 [2024-11-15 11:01:12.136996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.068 [2024-11-15 11:01:12.137044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.068 [2024-11-15 11:01:12.137053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.068 [2024-11-15 11:01:12.137060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.068 [2024-11-15 11:01:12.137066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.068 [2024-11-15 11:01:12.137871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.3jU 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.329 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.3jU 00:20:53.589 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.3jU 00:20:53.589 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.3jU 00:20:53.589 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.589 [2024-11-15 11:01:13.025488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.589 [2024-11-15 11:01:13.041479] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.589 [2024-11-15 11:01:13.041772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.589 malloc0 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.589 11:01:13 
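
The PSK handling above follows the NVMe TLS interchange format: the NVMeTLSkey-1:01:... string is written without a trailing newline, locked down to mode 0600, registered with the keyring under a short name, and every later RPC refers to that name rather than the path. Condensed sketch, with $SPDK_DIR as a placeholder:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'   # sample key from this log
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"   # the interchange format carries no newline
chmod 0600 "$key_path"         # keep the PSK unreadable to other users
"$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key0 "$key_path"
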
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=422151 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 422151 /var/tmp/bdevperf.sock 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 422151 ']' 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.589 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.590 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.850 [2024-11-15 11:01:13.184757] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:20:53.850 [2024-11-15 11:01:13.184830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422151 ] 00:20:53.850 [2024-11-15 11:01:13.277705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.850 [2024-11-15 11:01:13.328356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.792 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:54.792 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:54.792 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.3jU 00:20:54.792 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.052 [2024-11-15 11:01:14.367651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.052 TLSTESTn1 00:20:55.052 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.052 Running I/O for 10 seconds... 
00:20:57.378 4089.00 IOPS, 15.97 MiB/s [2024-11-15T10:01:17.844Z] 4869.00 IOPS, 19.02 MiB/s [2024-11-15T10:01:18.785Z] 4985.67 IOPS, 19.48 MiB/s [2024-11-15T10:01:19.724Z] 5162.00 IOPS, 20.16 MiB/s [2024-11-15T10:01:20.665Z] 5292.80 IOPS, 20.68 MiB/s [2024-11-15T10:01:21.606Z] 5328.33 IOPS, 20.81 MiB/s [2024-11-15T10:01:22.989Z] 5328.00 IOPS, 20.81 MiB/s [2024-11-15T10:01:23.942Z] 5425.62 IOPS, 21.19 MiB/s [2024-11-15T10:01:24.886Z] 5492.22 IOPS, 21.45 MiB/s [2024-11-15T10:01:24.886Z] 5579.20 IOPS, 21.79 MiB/s 00:21:05.359 Latency(us) 00:21:05.359 [2024-11-15T10:01:24.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.359 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.359 Verification LBA range: start 0x0 length 0x2000 00:21:05.359 TLSTESTn1 : 10.01 5585.30 21.82 0.00 0.00 22883.71 4942.51 53521.07 00:21:05.359 [2024-11-15T10:01:24.886Z] =================================================================================================================== 00:21:05.359 [2024-11-15T10:01:24.886Z] Total : 5585.30 21.82 0.00 0.00 22883.71 4942.51 53521.07 00:21:05.359 { 00:21:05.359 "results": [ 00:21:05.359 { 00:21:05.359 "job": "TLSTESTn1", 00:21:05.359 "core_mask": "0x4", 00:21:05.359 "workload": "verify", 00:21:05.359 "status": "finished", 00:21:05.359 "verify_range": { 00:21:05.359 "start": 0, 00:21:05.359 "length": 8192 00:21:05.359 }, 00:21:05.359 "queue_depth": 128, 00:21:05.359 "io_size": 4096, 00:21:05.359 "runtime": 10.011808, 00:21:05.359 "iops": 5585.304872007134, 00:21:05.359 "mibps": 21.817597156277866, 00:21:05.359 "io_failed": 0, 00:21:05.359 "io_timeout": 0, 00:21:05.359 "avg_latency_us": 22883.70816740881, 00:21:05.359 "min_latency_us": 4942.506666666667, 00:21:05.359 "max_latency_us": 53521.066666666666 00:21:05.359 } 00:21:05.359 ], 00:21:05.359 "core_count": 1 00:21:05.359 } 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:05.359 nvmf_trace.0 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 422151 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 422151 ']' 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 422151 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 422151 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:05.359 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 422151' 00:21:05.360 killing process with pid 422151 00:21:05.360 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 422151 00:21:05.360 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.360 00:21:05.360 Latency(us) 00:21:05.360 [2024-11-15T10:01:24.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.360 [2024-11-15T10:01:24.887Z] =================================================================================================================== 00:21:05.360 [2024-11-15T10:01:24.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.360 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 422151 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.620 rmmod nvme_tcp 00:21:05.620 rmmod nvme_fabrics 00:21:05.620 rmmod nvme_keyring 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 421815 ']' 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 421815 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 421815 ']' 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 421815 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.620 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 421815 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:05.620 11:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 421815' 00:21:05.620 killing process with pid 421815 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 421815 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 421815 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.620 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.3jU 00:21:08.186 00:21:08.186 real 0m23.256s 00:21:08.186 user 0m25.097s 00:21:08.186 sys 0m9.576s 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:08.186 ************************************ 00:21:08.186 END TEST nvmf_fips 00:21:08.186 ************************************ 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.186 ************************************ 00:21:08.186 START TEST nvmf_control_msg_list 00:21:08.186 ************************************ 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:08.186 * Looking for test storage... 
00:21:08.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.186 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:08.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.187 --rc genhtml_branch_coverage=1 00:21:08.187 --rc genhtml_function_coverage=1 00:21:08.187 --rc genhtml_legend=1 00:21:08.187 --rc geninfo_all_blocks=1 00:21:08.187 --rc geninfo_unexecuted_blocks=1 00:21:08.187 00:21:08.187 ' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:08.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.187 --rc genhtml_branch_coverage=1 00:21:08.187 --rc genhtml_function_coverage=1 00:21:08.187 --rc genhtml_legend=1 00:21:08.187 --rc geninfo_all_blocks=1 00:21:08.187 --rc geninfo_unexecuted_blocks=1 00:21:08.187 00:21:08.187 ' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:08.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.187 --rc genhtml_branch_coverage=1 00:21:08.187 --rc genhtml_function_coverage=1 00:21:08.187 --rc genhtml_legend=1 00:21:08.187 --rc geninfo_all_blocks=1 00:21:08.187 --rc geninfo_unexecuted_blocks=1 00:21:08.187 00:21:08.187 ' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:08.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.187 --rc genhtml_branch_coverage=1 00:21:08.187 --rc genhtml_function_coverage=1 00:21:08.187 --rc genhtml_legend=1 00:21:08.187 --rc geninfo_all_blocks=1 00:21:08.187 --rc geninfo_unexecuted_blocks=1 00:21:08.187 00:21:08.187 ' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.187 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.188 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.188 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.188 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.188 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:16.435 11:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:16.435 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.435 11:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:16.435 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:16.435 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:16.435 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.435 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.436 11:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.436 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:21:16.436 00:21:16.436 --- 10.0.0.2 ping statistics --- 00:21:16.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.436 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:21:16.436 00:21:16.436 --- 10.0.0.1 ping statistics --- 00:21:16.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.436 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=428556 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 428556 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 428556 ']' 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.436 [2024-11-15 11:01:35.129225] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:21:16.436 [2024-11-15 11:01:35.129290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.436 [2024-11-15 11:01:35.228183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.436 [2024-11-15 11:01:35.279415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.436 [2024-11-15 11:01:35.279467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.436 [2024-11-15 11:01:35.279475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.436 [2024-11-15 11:01:35.279483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.436 [2024-11-15 11:01:35.279489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
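[Editorial sketch, not part of the captured log: the nvmf_tcp_init plumbing and target launch traced above amount to the commands below. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are what this runner's e810 ports resolved to, so treat them as environment-specific.]

    # Move the target-side port into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the comment tags the rule so the
    # iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup seen earlier can strip it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions, then start the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &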
00:21:16.436 [2024-11-15 11:01:35.280313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.436 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 [2024-11-15 11:01:35.991644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.697 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 Malloc0 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.697 11:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 [2024-11-15 11:01:36.046062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=428856 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=428857 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=428858 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 428856 00:21:16.697 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.697 [2024-11-15 11:01:36.146641] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.697 [2024-11-15 11:01:36.156901] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.697 [2024-11-15 11:01:36.157201] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:18.082 Initializing NVMe Controllers 00:21:18.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:18.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:18.082 Initialization complete. Launching workers. 
00:21:18.082 ======================================================== 00:21:18.082 Latency(us) 00:21:18.082 Device Information : IOPS MiB/s Average min max 00:21:18.082 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1474.00 5.76 678.56 300.15 974.40 00:21:18.082 ======================================================== 00:21:18.082 Total : 1474.00 5.76 678.56 300.15 974.40 00:21:18.082 00:21:18.082 Initializing NVMe Controllers 00:21:18.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:18.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:18.082 Initialization complete. Launching workers. 00:21:18.082 ======================================================== 00:21:18.082 Latency(us) 00:21:18.082 Device Information : IOPS MiB/s Average min max 00:21:18.082 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1515.98 5.92 659.74 210.76 936.27 00:21:18.082 ======================================================== 00:21:18.082 Total : 1515.98 5.92 659.74 210.76 936.27 00:21:18.082 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 428857 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 428858 00:21:18.082 Initializing NVMe Controllers 00:21:18.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:18.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:18.082 Initialization complete. Launching workers. 00:21:18.082 ======================================================== 00:21:18.082 Latency(us) 00:21:18.082 Device Information : IOPS MiB/s Average min max 00:21:18.082 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40916.49 40818.77 41219.36 00:21:18.082 ======================================================== 00:21:18.082 Total : 25.00 0.10 40916.49 40818.77 41219.36 00:21:18.082 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.082 rmmod nvme_tcp 00:21:18.082 rmmod nvme_fabrics 00:21:18.082 rmmod nvme_keyring 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 428556 ']' 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 428556 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 428556 ']' 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 428556 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 428556 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 428556' 00:21:18.082 killing process with pid 428556 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 428556 00:21:18.082 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 428556 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.342 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.885 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.885 00:21:20.885 real 0m12.519s 00:21:20.885 user 0m8.123s 00:21:20.885 sys 0m6.679s 00:21:20.885 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:20.885 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:20.885 ************************************ 00:21:20.885 END TEST nvmf_control_msg_list 00:21:20.885 ************************************ 00:21:20.885 
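[Editorial sketch, not part of the captured log: the control-msg-list pass that just completed is compact enough to restate end to end. This uses scripts/rpc.py directly in place of the test's rpc_cmd wrapper and abbreviates paths to $SPDK; note --control-msg-num 1 is the knob under test, which is why the third perf instance above saw ~41 ms latencies.]

    # Transport with small in-capsule data and a single control-message buffer.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1

    # Subsystem backed by a 32 MiB malloc bdev with 512 B blocks, listening on the target address.
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Three single-QD readers on cores 1..3, as perf_pid1/2/3 above, contending for
    # that one control-message buffer.
    for mask in 0x2 0x4 0x8; do
        $SPDK/build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait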
11:01:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:20.885 11:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:20.886 11:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:20.886 11:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.886 ************************************ 00:21:20.886 START TEST nvmf_wait_for_buf 00:21:20.886 ************************************ 00:21:20.886 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:20.886 * Looking for test storage... 00:21:20.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.886 --rc genhtml_branch_coverage=1 00:21:20.886 --rc genhtml_function_coverage=1 00:21:20.886 --rc genhtml_legend=1 00:21:20.886 --rc geninfo_all_blocks=1 00:21:20.886 --rc geninfo_unexecuted_blocks=1 00:21:20.886 00:21:20.886 ' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.886 --rc genhtml_branch_coverage=1 00:21:20.886 --rc genhtml_function_coverage=1 00:21:20.886 --rc genhtml_legend=1 00:21:20.886 --rc geninfo_all_blocks=1 00:21:20.886 --rc geninfo_unexecuted_blocks=1 00:21:20.886 00:21:20.886 ' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.886 --rc genhtml_branch_coverage=1 00:21:20.886 --rc genhtml_function_coverage=1 00:21:20.886 --rc genhtml_legend=1 00:21:20.886 --rc geninfo_all_blocks=1 00:21:20.886 --rc geninfo_unexecuted_blocks=1 00:21:20.886 00:21:20.886 ' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.886 --rc genhtml_branch_coverage=1 00:21:20.886 --rc genhtml_function_coverage=1 00:21:20.886 --rc genhtml_legend=1 00:21:20.886 --rc geninfo_all_blocks=1 00:21:20.886 --rc geninfo_unexecuted_blocks=1 00:21:20.886 00:21:20.886 ' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.886 11:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.886 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.887 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.025 
11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:29.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:29.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:29.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.025 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:29.026 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.026 11:01:47 
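The discovery pass traced above collects Intel E810 device IDs (0x1592, 0x159b) into the e810 array, matches the two ports at 0000:4b:00.0 and 0000:4b:00.1 bound to the ice driver, and resolves each to its kernel net device (cvl_0_0, cvl_0_1) through sysfs. A hedged standalone sketch of the same scan, not the actual nvmf/common.sh helper, using only standard sysfs paths:

#!/usr/bin/env bash
# Pick out Intel E810 ports and map each to the net device sysfs lists
# under the PCI node, mirroring the "Found ..." lines in the trace above.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" ]] || continue
    device=$(<"$pci/device")
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($intel - $device)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done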
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:29.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:21:29.026 00:21:29.026 --- 10.0.0.2 ping statistics --- 00:21:29.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.026 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:21:29.026 00:21:29.026 --- 10.0.0.1 ping statistics --- 00:21:29.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.026 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=433222 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 433222 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 433222 ']' 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:29.026 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.026 [2024-11-15 11:01:47.716396] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
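The plumbing traced above lets one physical host exercise both ends of a real E810 link: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the path before the target is started. Condensed into a standalone sketch; the interface names match this CI box and will differ elsewhere:

#!/usr/bin/env bash
# Two-port, one-host NVMe/TCP test topology, as set up in the trace above.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target port leaves the root ns
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns

The harness additionally tags its iptables rule with an SPDK_NVMF comment, which is what lets nvmftestfini strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore, as the teardown trace further down shows.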
00:21:29.026 [2024-11-15 11:01:47.716466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.026 [2024-11-15 11:01:47.818097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.026 [2024-11-15 11:01:47.869700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.026 [2024-11-15 11:01:47.869756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.026 [2024-11-15 11:01:47.869764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.026 [2024-11-15 11:01:47.869772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.026 [2024-11-15 11:01:47.869778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.026 [2024-11-15 11:01:47.870622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.026 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:29.026 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:29.026 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:29.026 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.026 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 
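With connectivity proven, nvmf_tgt is launched inside the target namespace with --wait-for-rpc, and the trace above issues the three RPCs that define this test before initialization completes: the accel caches are disabled, the small iobuf pool is capped at 154 buffers of 8192 bytes, and only then does framework_start_init run. The deliberately tiny pool is the point of wait_for_buf: under load, the TCP transport must repeatedly wait for a small buffer. rpc_cmd in the harness is a thin wrapper over scripts/rpc.py; written as direct invocations (flags copied verbatim from the trace, SPDK_ROOT an assumption to adjust for your tree):

#!/usr/bin/env bash
# The pre-init RPC sequence from the trace, against scripts/rpc.py.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"

"$RPC" accel_set_options --small-cache-size 0 --large-cache-size 0
"$RPC" iobuf_set_options --small-pool-count 154 --small_bufsize=8192
"$RPC" framework_start_init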
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 Malloc0 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 [2024-11-15 11:01:48.696680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 [2024-11-15 11:01:48.733022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.287 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.548 [2024-11-15 11:01:48.837676] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:30.936 Initializing NVMe Controllers
00:21:30.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:30.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:30.936 Initialization complete. Launching workers.
00:21:30.936 ========================================================
00:21:30.936                                                            Latency(us)
00:21:30.936 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:21:30.936 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     129.00      16.12   32294.81    8007.87   63858.58
00:21:30.936 ========================================================
00:21:30.936 Total                                                  :     129.00      16.12   32294.81    8007.87   63858.58
00:21:30.936
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:30.936 rmmod nvme_tcp
00:21:30.936 rmmod nvme_fabrics
00:21:30.936 rmmod nvme_keyring
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 433222 ']'
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 433222
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 433222 ']'
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 433222
00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:30.936 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 433222 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 433222' 00:21:31.196 killing process with pid 433222 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 433222 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 433222 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.196 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.738 00:21:33.738 real 0m12.865s 00:21:33.738 user 0m5.279s 00:21:33.738 sys 0m6.180s 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:33.738 ************************************ 00:21:33.738 END TEST nvmf_wait_for_buf 00:21:33.738 ************************************ 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:33.738 11:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.738 11:01:52 
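The wait_for_buf pass criterion sits in the middle of the trace above: iobuf_get_stats is filtered for the nvmf_TCP module's small-pool retry counter, and the test only fails if that counter stayed at zero. Here it reached 2038, so the wait-for-buffer path was genuinely exercised before the teardown that follows (module removal, killing pid 433222, restoring iptables, deleting the namespace). Reconstructed as a standalone check, with the same SPDK_ROOT assumption as the earlier sketch:

#!/usr/bin/env bash
# Pass/fail check from the trace: the TCP transport must have retried
# small-iobuf allocation at least once during the perf run.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"

retry_count=$("$RPC" iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry_count" -eq 0 ]]; then
    echo "nvmf_wait_for_buf: FAIL, no small-pool retries recorded" >&2
    exit 1
fi
echo "nvmf_wait_for_buf: OK, $retry_count retries (2038 on this run)"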
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:41.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:41.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:41.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:41.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.874 ************************************ 00:21:41.874 START TEST nvmf_perf_adq 00:21:41.874 ************************************ 00:21:41.874 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:41.874 * Looking for test storage... 00:21:41.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.874 11:02:00 
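Every test in this job is framed by the same run_test wrapper whose banners and timing appear above: START TEST / END TEST around nvmf_wait_for_buf, with the real 0m12.865s summary in between, and now the same framing around nvmf_perf_adq. A simplified sketch of that harness, assuming only what the banners show; the real wrapper in autotest_common.sh also manages xtrace state and the '[' 3 -le 1 ']' argument check visible in the trace:

#!/usr/bin/env bash
# Illustrative harness: banner, timed run, banner, preserved exit code.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test_sketch nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp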
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.874 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:41.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.875 --rc genhtml_branch_coverage=1 00:21:41.875 --rc genhtml_function_coverage=1 00:21:41.875 --rc genhtml_legend=1 00:21:41.875 --rc geninfo_all_blocks=1 00:21:41.875 --rc geninfo_unexecuted_blocks=1 00:21:41.875 00:21:41.875 ' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:41.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.875 --rc genhtml_branch_coverage=1 00:21:41.875 --rc genhtml_function_coverage=1 00:21:41.875 --rc genhtml_legend=1 00:21:41.875 --rc geninfo_all_blocks=1 00:21:41.875 --rc geninfo_unexecuted_blocks=1 00:21:41.875 00:21:41.875 ' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:41.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.875 --rc genhtml_branch_coverage=1 00:21:41.875 --rc genhtml_function_coverage=1 00:21:41.875 --rc genhtml_legend=1 00:21:41.875 --rc geninfo_all_blocks=1 00:21:41.875 --rc geninfo_unexecuted_blocks=1 00:21:41.875 00:21:41.875 ' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:41.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.875 --rc genhtml_branch_coverage=1 00:21:41.875 --rc genhtml_function_coverage=1 00:21:41.875 --rc genhtml_legend=1 00:21:41.875 --rc geninfo_all_blocks=1 00:21:41.875 --rc geninfo_unexecuted_blocks=1 00:21:41.875 00:21:41.875 ' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:41.875 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.875 11:02:00 
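The message "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", printed again just above and earlier during the wait_for_buf run, is classic test(1) noise rather than a failure: line 33 evaluates '[' '' -eq 1 ']' because the variable it guards is unset, test rejects the empty string as an integer, the comparison comes back false, and the script simply takes the false branch. A two-line reproduction plus the usual quiet form; the variable name here is illustrative:

#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" noise and its fix.
unset MAYBE_FLAG
[ "$MAYBE_FLAG" -eq 1 ] && echo "flag set"       # prints the error, evaluates false
[ "${MAYBE_FLAG:-0}" -eq 1 ] && echo "flag set"  # defaulted expansion stays quiet
echo "either way, execution continues"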
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.489 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.490 11:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.490 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.490 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.490 11:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.490 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:48.490 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:49.874 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:51.788 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:57.079 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:57.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:57.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:57.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.079 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:57.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:57.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms
00:21:57.080
00:21:57.080 --- 10.0.0.2 ping statistics ---
00:21:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:57.080 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:57.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:57.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:21:57.080
00:21:57.080 --- 10.0.0.1 ping statistics ---
00:21:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:57.080 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:57.080 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=443509
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 443509
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 443509 ']'
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:57.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:57.341 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:57.341 [2024-11-15 11:02:16.692902] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
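
The nvmftestinit sequence just replayed boils down to a small amount of iproute2 plumbing. A minimal standalone sketch of the same topology, assuming the two E810 port-mates cvl_0_0 and cvl_0_1 are physically looped back as on this CI host (the setup_nvmf_tcp_ns function name is illustrative, not an SPDK helper):

  #!/usr/bin/env bash
  # Sketch of the namespace layout nvmftestinit builds above. Names and
  # addresses follow the log; the helper name is ours.
  set -e

  setup_nvmf_tcp_ns() {
      local target_if=$1 initiator_if=$2 ns=$3

      # Target side lives in its own namespace so target and initiator
      # traffic really traverses the NIC instead of the host loopback.
      ip netns add "$ns"
      ip link set "$target_if" netns "$ns"

      ip addr add 10.0.0.1/24 dev "$initiator_if"
      ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

      ip link set "$initiator_if" up
      ip netns exec "$ns" ip link set "$target_if" up
      ip netns exec "$ns" ip link set lo up

      # Let NVMe/TCP through, tagged so teardown can strip the rule later.
      iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
          -m comment --comment 'SPDK_NVMF: allow 4420'
  }

  setup_nvmf_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two pings at the end mirror the checks in the log: both directions must answer before the target application is started inside the namespace.
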
00:21:57.341 [2024-11-15 11:02:16.692969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.341 [2024-11-15 11:02:16.794960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.341 [2024-11-15 11:02:16.851551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.341 [2024-11-15 11:02:16.851629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.341 [2024-11-15 11:02:16.851638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.341 [2024-11-15 11:02:16.851646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.341 [2024-11-15 11:02:16.851652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.341 [2024-11-15 11:02:16.853702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.341 [2024-11-15 11:02:16.853836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.341 [2024-11-15 11:02:16.853996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.341 [2024-11-15 11:02:16.853997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.282 
11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.282 [2024-11-15 11:02:17.719572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.282 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.283 Malloc1
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.283 [2024-11-15 11:02:17.805737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:58.283 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.543 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=443803
00:21:58.543 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
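
rpc_cmd in the trace above is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The same target state can be reproduced by hand with plain rpc.py calls; a sketch, assuming $rootdir points at the SPDK checkout and nvmf_tgt was started with --wait-for-rpc as in this run:

  rpc=$rootdir/scripts/rpc.py

  $rpc framework_start_init                     # leave the --wait-for-rpc hold state
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The malloc bdev becomes namespace 1 of cnode1, and the listener is the 10.0.0.2:4420 endpoint inside the network namespace that the flower filter later keys on.
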
00:21:58.543 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:00.456   "tick_rate": 2400000000,
00:22:00.456   "poll_groups": [
00:22:00.456     {
00:22:00.456       "name": "nvmf_tgt_poll_group_000",
00:22:00.456       "admin_qpairs": 1,
00:22:00.456       "io_qpairs": 1,
00:22:00.456       "current_admin_qpairs": 1,
00:22:00.456       "current_io_qpairs": 1,
00:22:00.456       "pending_bdev_io": 0,
00:22:00.456       "completed_nvme_io": 17420,
00:22:00.456       "transports": [
00:22:00.456         {
00:22:00.456           "trtype": "TCP"
00:22:00.456         }
00:22:00.456       ]
00:22:00.456     },
00:22:00.456     {
00:22:00.456       "name": "nvmf_tgt_poll_group_001",
00:22:00.456       "admin_qpairs": 0,
00:22:00.456       "io_qpairs": 1,
00:22:00.456       "current_admin_qpairs": 0,
00:22:00.456       "current_io_qpairs": 1,
00:22:00.456       "pending_bdev_io": 0,
00:22:00.456       "completed_nvme_io": 19494,
00:22:00.456       "transports": [
00:22:00.456         {
00:22:00.456           "trtype": "TCP"
00:22:00.456         }
00:22:00.456       ]
00:22:00.456     },
00:22:00.456     {
00:22:00.456       "name": "nvmf_tgt_poll_group_002",
00:22:00.456       "admin_qpairs": 0,
00:22:00.456       "io_qpairs": 1,
00:22:00.456       "current_admin_qpairs": 0,
00:22:00.456       "current_io_qpairs": 1,
00:22:00.456       "pending_bdev_io": 0,
00:22:00.456       "completed_nvme_io": 20836,
00:22:00.456       "transports": [
00:22:00.456         {
00:22:00.456           "trtype": "TCP"
00:22:00.456         }
00:22:00.456       ]
00:22:00.456     },
00:22:00.456     {
00:22:00.456       "name": "nvmf_tgt_poll_group_003",
00:22:00.456       "admin_qpairs": 0,
00:22:00.456       "io_qpairs": 1,
00:22:00.456       "current_admin_qpairs": 0,
00:22:00.456       "current_io_qpairs": 1,
00:22:00.456       "pending_bdev_io": 0,
00:22:00.456       "completed_nvme_io": 17945,
00:22:00.456       "transports": [
00:22:00.456         {
00:22:00.456           "trtype": "TCP"
00:22:00.456         }
00:22:00.456       ]
00:22:00.456     }
00:22:00.456   ]
00:22:00.456 }'
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:00.456 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 443803
00:22:08.592 Initializing NVMe Controllers
00:22:08.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:08.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:08.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:08.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:08.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:08.592 Initialization complete. Launching workers.
00:22:08.592 ========================================================
00:22:08.592                                                                            Latency(us)
00:22:08.592 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:08.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   12782.20      49.93    5007.00    1357.65   12490.46
00:22:08.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   13378.20      52.26    4784.13    1125.64   13040.33
00:22:08.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   13815.90      53.97    4631.85    1174.67   13089.53
00:22:08.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   12671.40      49.50    5050.44    1102.35   12856.47
00:22:08.592 ========================================================
00:22:08.592 Total                                                                    :   52647.69     205.66    4862.37    1102.35   13089.53
00:22:08.592
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:08.592 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:08.592 rmmod nvme_tcp
00:22:08.592 rmmod nvme_fabrics
00:22:08.592 rmmod nvme_keyring
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 443509 ']'
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 443509
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 443509 ']'
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 443509
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 443509
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 443509'
00:22:08.852 killing process with pid 443509
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 443509
00:22:08.852 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 443509
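
The @85-@87 lines above are the actual ADQ pass/fail check: with four reactors (-m 0xF) and the perf job pinned to four cores (-c 0xF0), each poll group must end up owning exactly one active I/O qpair. Restated as a standalone snippet (reusing the harness's rpc_cmd wrapper from the trace; the error message text is illustrative):

  # Count poll groups that currently own exactly one I/O qpair; jq prints
  # one line (the object's key count) per matching group, so wc -l gives
  # the number of groups carrying I/O.
  count=$(rpc_cmd nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l)
  if [[ $count -ne 4 ]]; then
      echo "ADQ steering failed: only $count of 4 poll groups got an I/O qpair"
      exit 1
  fi

The completed_nvme_io counters in the stats (17420 / 19494 / 20836 / 17945) show the same thing from another angle: the load landed roughly evenly on all four groups rather than piling onto one reactor.
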
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:08.853 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:11.395 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:11.395 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:22:11.395 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:22:11.395 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:22:12.776 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:22:14.689 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086
mellanox=0x15b3 pci net_dev 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:20.188 11:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:20.188 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:20.188 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:20.188 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.188 11:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:20.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.188 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:20.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:22:20.189 00:22:20.189 --- 10.0.0.2 ping statistics --- 00:22:20.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.189 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:22:20.189 00:22:20.189 --- 10.0.0.1 ping statistics --- 00:22:20.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.189 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:20.189 net.core.busy_poll = 1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:22:20.189 net.core.busy_read = 1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=448427 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 448427 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 448427 ']' 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:20.189 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:20.470 [2024-11-15 11:02:39.741290] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:22:20.470 [2024-11-15 11:02:39.741360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.470 [2024-11-15 11:02:39.841680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.471 [2024-11-15 11:02:39.895216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
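
adq_configure_driver (@22-@38 above) is the NIC-side half of ADQ: carve the port into two hardware traffic classes, turn on socket busy polling, and pin inbound NVMe/TCP to the dedicated class in hardware. The same sequence as a standalone sketch, with this run's device, namespace and filter values:

tgt() { ip netns exec cvl_0_0_ns_spdk "$@"; }     # target-side namespace wrapper
tgt ethtool --offload cvl_0_0 hw-tc-offload on
tgt ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                    # poll sockets instead of sleeping
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 -> queues 0-1 (default), TC1 -> queues 2-3 (NVMe/TCP)
tgt /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
tgt /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# steer dst 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
tgt /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The application half follows in the trace: sock_impl_set_options --enable-placement-id 1 ties SPDK poll groups to those hardware queues, and nvmf_create_transport is given --sock-priority 1 so the target's sockets land in TC1.
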
00:22:20.471 [2024-11-15 11:02:39.895269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.471 [2024-11-15 11:02:39.895278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.471 [2024-11-15 11:02:39.895293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.471 [2024-11-15 11:02:39.895299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.471 [2024-11-15 11:02:39.897416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.471 [2024-11-15 11:02:39.897628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.471 [2024-11-15 11:02:39.897694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.471 [2024-11-15 11:02:39.897712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.042 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:21.043 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:21.043 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.043 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.043 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.303 11:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 [2024-11-15 11:02:40.763363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.303 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 Malloc1 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.304 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.564 [2024-11-15 11:02:40.836289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.564 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.564 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=448630 00:22:21.564 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:21.564 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:23.471 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:23.471 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.471 11:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.471 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.471 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:23.471   "tick_rate": 2400000000,
00:22:23.471   "poll_groups": [
00:22:23.471     {
00:22:23.471       "name": "nvmf_tgt_poll_group_000",
00:22:23.471       "admin_qpairs": 1,
00:22:23.471       "io_qpairs": 2,
00:22:23.471       "current_admin_qpairs": 1,
00:22:23.471       "current_io_qpairs": 2,
00:22:23.471       "pending_bdev_io": 0,
00:22:23.471       "completed_nvme_io": 27996,
00:22:23.471       "transports": [
00:22:23.471         {
00:22:23.471           "trtype": "TCP"
00:22:23.471         }
00:22:23.471       ]
00:22:23.471     },
00:22:23.471     {
00:22:23.471       "name": "nvmf_tgt_poll_group_001",
00:22:23.471       "admin_qpairs": 0,
00:22:23.471       "io_qpairs": 2,
00:22:23.471       "current_admin_qpairs": 0,
00:22:23.471       "current_io_qpairs": 2,
00:22:23.471       "pending_bdev_io": 0,
00:22:23.471       "completed_nvme_io": 29108,
00:22:23.471       "transports": [
00:22:23.471         {
00:22:23.471           "trtype": "TCP"
00:22:23.471         }
00:22:23.471       ]
00:22:23.471     },
00:22:23.471     {
00:22:23.471       "name": "nvmf_tgt_poll_group_002",
00:22:23.471       "admin_qpairs": 0,
00:22:23.471       "io_qpairs": 0,
00:22:23.471       "current_admin_qpairs": 0,
00:22:23.471       "current_io_qpairs": 0,
00:22:23.471       "pending_bdev_io": 0,
00:22:23.471       "completed_nvme_io": 0,
00:22:23.471       "transports": [
00:22:23.471         {
00:22:23.471           "trtype": "TCP"
00:22:23.471         }
00:22:23.471       ]
00:22:23.471     },
00:22:23.471     {
00:22:23.471       "name": "nvmf_tgt_poll_group_003",
00:22:23.471       "admin_qpairs": 0,
00:22:23.471       "io_qpairs": 0,
00:22:23.471       "current_admin_qpairs": 0,
00:22:23.471       "current_io_qpairs": 0,
00:22:23.471       "pending_bdev_io": 0,
00:22:23.471       "completed_nvme_io": 0,
00:22:23.471       "transports": [
00:22:23.471         {
00:22:23.471           "trtype": "TCP"
00:22:23.471         }
00:22:23.471       ]
00:22:23.471     }
00:22:23.471   ]
00:22:23.471 }'
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 448630
00:22:31.596 Initializing NVMe Controllers
00:22:31.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:31.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:31.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:31.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:31.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:31.596 Initialization complete. Launching workers.
00:22:31.596 ========================================================
00:22:31.596                                                                    Latency(us)
00:22:31.596 Device Information                                           :       IOPS      MiB/s    Average        min        max
00:22:31.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   10480.60      40.94    6108.15     967.33   52539.22
00:22:31.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   10078.10      39.37    6359.95     925.51   54710.49
00:22:31.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:    8368.80      32.69    7647.10    1073.18   53610.17
00:22:31.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:    8692.60      33.96    7364.17    1124.23   53090.25
00:22:31.597 ========================================================
00:22:31.597 Total                                                        :   37620.10     146.95    6808.17     925.51   54710.49
00:22:31.597
00:22:31.597 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:31.597 rmmod nvme_tcp
00:22:31.597 rmmod nvme_fabrics
00:22:31.597 rmmod nvme_keyring
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 448427 ']'
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 448427
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 448427 ']'
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 448427
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 448427
00:22:31.856 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 448427'
killing process with pid 448427
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 448427
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 448427
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:31.856 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:22:34.396
00:22:34.396 real 0m53.346s
00:22:34.396 user 2m49.689s
00:22:34.396 sys 0m11.651s
11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable
11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:34.396 ************************************
00:22:34.396 END TEST nvmf_perf_adq
00:22:34.396 ************************************
00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
11:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
11:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
11:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:34.396 ************************************
00:22:34.396 START TEST nvmf_shutdown
00:22:34.396 ************************************
00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:22:34.396 * Looking for test storage...
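
The [[ 2 -lt 2 ]] check earlier is the actual ADQ pass condition: spdk_nvme_perf drove four cores, yet nvmf_get_stats shows all I/O confined to poll groups 000 and 001 (the two TC1 queues), so the jq pipeline counts exactly two idle groups and the test would only have failed with fewer than two. The firewall teardown above works because every rule the harness inserted carries the SPDK_NVMF comment; roughly:

# iptr (@297/@791): rewrite the ruleset minus anything the suite tagged
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
# _remove_spdk_ns presumably deletes cvl_0_0_ns_spdk; its body is not traced here
ip netns delete cvl_0_0_ns_spdk
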
00:22:34.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.396 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:34.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.397 --rc genhtml_branch_coverage=1 00:22:34.397 --rc genhtml_function_coverage=1 00:22:34.397 --rc genhtml_legend=1 00:22:34.397 --rc geninfo_all_blocks=1 00:22:34.397 --rc geninfo_unexecuted_blocks=1 00:22:34.397 00:22:34.397 ' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:34.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.397 --rc genhtml_branch_coverage=1 00:22:34.397 --rc genhtml_function_coverage=1 00:22:34.397 --rc genhtml_legend=1 00:22:34.397 --rc geninfo_all_blocks=1 00:22:34.397 --rc geninfo_unexecuted_blocks=1 00:22:34.397 00:22:34.397 ' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:34.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.397 --rc genhtml_branch_coverage=1 00:22:34.397 --rc genhtml_function_coverage=1 00:22:34.397 --rc genhtml_legend=1 00:22:34.397 --rc geninfo_all_blocks=1 00:22:34.397 --rc geninfo_unexecuted_blocks=1 00:22:34.397 00:22:34.397 ' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:34.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.397 --rc genhtml_branch_coverage=1 00:22:34.397 --rc genhtml_function_coverage=1 00:22:34.397 --rc genhtml_legend=1 00:22:34.397 --rc geninfo_all_blocks=1 00:22:34.397 --rc geninfo_unexecuted_blocks=1 00:22:34.397 00:22:34.397 ' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:34.397 11:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.397 ************************************ 00:22:34.397 START TEST nvmf_shutdown_tc1 00:22:34.397 ************************************ 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.397 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.533 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.533 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.533 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.533 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.533 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.533 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.533 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.533 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.534 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:22:42.534 00:22:42.534 --- 10.0.0.2 ping statistics --- 00:22:42.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.534 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:22:42.534 00:22:42.534 --- 10.0.0.1 ping statistics --- 00:22:42.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.534 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=455154 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 455154 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 455154 ']' 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
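
nvmfappstart repeats the perf_adq startup pattern for the shutdown suite: nvmf_tgt is launched through the namespace wrapper with core mask 0x1E, and waitforlisten blocks until the application's RPC socket answers (the @833-@840 guards above are that wait's entry). A sketch of the idiom, assuming the usual poll-the-socket retry loop; the loop body itself is not echoed in the trace:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# retry an RPC until the target answers on /var/tmp/spdk.sock (max_retries=100 above)
for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
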
00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.534 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.534 [2024-11-15 11:03:01.379671] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:22:42.534 [2024-11-15 11:03:01.379738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.534 [2024-11-15 11:03:01.481542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.534 [2024-11-15 11:03:01.534177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.534 [2024-11-15 11:03:01.534234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.534 [2024-11-15 11:03:01.534243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.534 [2024-11-15 11:03:01.534251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.534 [2024-11-15 11:03:01.534257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.534 [2024-11-15 11:03:01.536332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.534 [2024-11-15 11:03:01.536494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.534 [2024-11-15 11:03:01.536638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.534 [2024-11-15 11:03:01.536639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.795 [2024-11-15 11:03:02.255026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.795 11:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.795 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.796 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:43.056 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:43.056 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.056 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.056 Malloc1 
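
The @27-@29 loop above assembles rpcs.txt: each pass cats one subsystem's worth of RPC lines into the file, and the single rpc_cmd at @36 replays the whole file over one RPC connection, which is why Malloc1 above through Malloc10 below show up in a burst. The heredoc's payload is not echoed in the trace; judging by the bdevs, subsystems and listeners it produces, and the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set at shutdown.sh@12-@13 earlier, each pass plausibly appends something like:

# assumed per-iteration payload (the trace does not echo the heredoc)
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
rpc_cmd < "$testdir/rpcs.txt"   # one round trip creates all ten subsystems
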
00:22:43.056 [2024-11-15 11:03:02.392961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:43.056 Malloc2
00:22:43.056 Malloc3
00:22:43.056 Malloc4
00:22:43.056 Malloc5
00:22:43.317 Malloc6
00:22:43.317 Malloc7
00:22:43.317 Malloc8
00:22:43.317 Malloc9
00:22:43.317 Malloc10
00:22:43.317 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:43.317 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:43.317 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:43.317 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=455498
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 455498 /var/tmp/bdevperf.sock
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 455498 ']'
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.580 { 00:22:43.580 "params": { 00:22:43.580 "name": "Nvme$subsystem", 00:22:43.580 "trtype": "$TEST_TRANSPORT", 00:22:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.580 "adrfam": "ipv4", 00:22:43.580 "trsvcid": "$NVMF_PORT", 00:22:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.580 "hdgst": ${hdgst:-false}, 00:22:43.580 "ddgst": ${ddgst:-false} 00:22:43.580 }, 00:22:43.580 "method": "bdev_nvme_attach_controller" 00:22:43.580 } 00:22:43.580 EOF 00:22:43.580 )") 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.580 { 00:22:43.580 "params": { 00:22:43.580 "name": "Nvme$subsystem", 00:22:43.580 "trtype": "$TEST_TRANSPORT", 00:22:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.580 "adrfam": "ipv4", 00:22:43.580 "trsvcid": "$NVMF_PORT", 00:22:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.580 "hdgst": ${hdgst:-false}, 00:22:43.580 "ddgst": ${ddgst:-false} 00:22:43.580 }, 00:22:43.580 "method": "bdev_nvme_attach_controller" 00:22:43.580 } 00:22:43.580 EOF 00:22:43.580 )") 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.580 { 00:22:43.580 "params": { 00:22:43.580 "name": "Nvme$subsystem", 00:22:43.580 "trtype": "$TEST_TRANSPORT", 00:22:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.580 "adrfam": "ipv4", 00:22:43.580 "trsvcid": "$NVMF_PORT", 00:22:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.580 "hdgst": ${hdgst:-false}, 00:22:43.580 "ddgst": ${ddgst:-false} 00:22:43.580 }, 00:22:43.580 "method": "bdev_nvme_attach_controller" 
00:22:43.580 } 00:22:43.580 EOF 00:22:43.580 )") 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.580 { 00:22:43.580 "params": { 00:22:43.580 "name": "Nvme$subsystem", 00:22:43.580 "trtype": "$TEST_TRANSPORT", 00:22:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.580 "adrfam": "ipv4", 00:22:43.580 "trsvcid": "$NVMF_PORT", 00:22:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.580 "hdgst": ${hdgst:-false}, 00:22:43.580 "ddgst": ${ddgst:-false} 00:22:43.580 }, 00:22:43.580 "method": "bdev_nvme_attach_controller" 00:22:43.580 } 00:22:43.580 EOF 00:22:43.580 )") 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.580 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.580 { 00:22:43.580 "params": { 00:22:43.580 "name": "Nvme$subsystem", 00:22:43.580 "trtype": "$TEST_TRANSPORT", 00:22:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.580 "adrfam": "ipv4", 00:22:43.580 "trsvcid": "$NVMF_PORT", 00:22:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.580 "hdgst": ${hdgst:-false}, 00:22:43.580 "ddgst": ${ddgst:-false} 00:22:43.580 }, 00:22:43.580 "method": "bdev_nvme_attach_controller" 00:22:43.580 } 00:22:43.580 EOF 00:22:43.580 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.581 { 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme$subsystem", 00:22:43.581 "trtype": "$TEST_TRANSPORT", 00:22:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "$NVMF_PORT", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.581 "hdgst": ${hdgst:-false}, 00:22:43.581 "ddgst": ${ddgst:-false} 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 } 00:22:43.581 EOF 00:22:43.581 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 [2024-11-15 11:03:02.909671] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:22:43.581 [2024-11-15 11:03:02.909749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.581 { 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme$subsystem", 00:22:43.581 "trtype": "$TEST_TRANSPORT", 00:22:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "$NVMF_PORT", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.581 "hdgst": ${hdgst:-false}, 00:22:43.581 "ddgst": ${ddgst:-false} 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 } 00:22:43.581 EOF 00:22:43.581 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.581 { 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme$subsystem", 00:22:43.581 "trtype": "$TEST_TRANSPORT", 00:22:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "$NVMF_PORT", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.581 "hdgst": ${hdgst:-false}, 00:22:43.581 "ddgst": ${ddgst:-false} 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 } 00:22:43.581 EOF 00:22:43.581 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.581 { 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme$subsystem", 00:22:43.581 "trtype": "$TEST_TRANSPORT", 00:22:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "$NVMF_PORT", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.581 "hdgst": ${hdgst:-false}, 00:22:43.581 "ddgst": ${ddgst:-false} 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 } 00:22:43.581 EOF 00:22:43.581 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.581 { 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme$subsystem", 00:22:43.581 "trtype": "$TEST_TRANSPORT", 00:22:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.581 "adrfam": "ipv4", 
00:22:43.581 "trsvcid": "$NVMF_PORT", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.581 "hdgst": ${hdgst:-false}, 00:22:43.581 "ddgst": ${ddgst:-false} 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 } 00:22:43.581 EOF 00:22:43.581 )") 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:43.581 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme1", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme2", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme3", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme4", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme5", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme6", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme7", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 
"adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme8", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme9", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.581 "trsvcid": "4420", 00:22:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.581 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.581 "hdgst": false, 00:22:43.581 "ddgst": false 00:22:43.581 }, 00:22:43.581 "method": "bdev_nvme_attach_controller" 00:22:43.581 },{ 00:22:43.581 "params": { 00:22:43.581 "name": "Nvme10", 00:22:43.581 "trtype": "tcp", 00:22:43.581 "traddr": "10.0.0.2", 00:22:43.581 "adrfam": "ipv4", 00:22:43.582 "trsvcid": "4420", 00:22:43.582 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.582 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.582 "hdgst": false, 00:22:43.582 "ddgst": false 00:22:43.582 }, 00:22:43.582 "method": "bdev_nvme_attach_controller" 00:22:43.582 }' 00:22:43.582 [2024-11-15 11:03:03.007755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.582 [2024-11-15 11:03:03.062302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 455498 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:44.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 455498 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:44.965 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:45.904 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 455154 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 [2024-11-15 11:03:05.371092] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:22:45.905 [2024-11-15 11:03:05.371148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455942 ] 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 "adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.905 "ddgst": ${ddgst:-false} 00:22:45.905 }, 00:22:45.905 "method": "bdev_nvme_attach_controller" 00:22:45.905 } 00:22:45.905 EOF 00:22:45.905 )") 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.905 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.905 { 00:22:45.905 "params": { 00:22:45.905 "name": "Nvme$subsystem", 00:22:45.905 "trtype": "$TEST_TRANSPORT", 00:22:45.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.905 
"adrfam": "ipv4", 00:22:45.905 "trsvcid": "$NVMF_PORT", 00:22:45.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.905 "hdgst": ${hdgst:-false}, 00:22:45.906 "ddgst": ${ddgst:-false} 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 } 00:22:45.906 EOF 00:22:45.906 )") 00:22:45.906 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.906 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:45.906 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:45.906 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme1", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme2", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme3", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme4", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme5", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme6", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme7", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 
00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme8", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme9", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 },{ 00:22:45.906 "params": { 00:22:45.906 "name": "Nvme10", 00:22:45.906 "trtype": "tcp", 00:22:45.906 "traddr": "10.0.0.2", 00:22:45.906 "adrfam": "ipv4", 00:22:45.906 "trsvcid": "4420", 00:22:45.906 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.906 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.906 "hdgst": false, 00:22:45.906 "ddgst": false 00:22:45.906 }, 00:22:45.906 "method": "bdev_nvme_attach_controller" 00:22:45.906 }' 00:22:46.167 [2024-11-15 11:03:05.460987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.167 [2024-11-15 11:03:05.496941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.551 Running I/O for 1 seconds... 
00:22:48.493 1923.00 IOPS, 120.19 MiB/s
00:22:48.493 Latency(us)
[2024-11-15T10:03:08.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:48.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme1n1 : 1.13 226.19 14.14 0.00 0.00 279919.57 18786.99 249910.61
00:22:48.493 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme2n1 : 1.14 225.39 14.09 0.00 0.00 276289.71 19879.25 251658.24
00:22:48.493 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme3n1 : 1.05 243.54 15.22 0.00 0.00 250441.60 14417.92 255153.49
00:22:48.493 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme4n1 : 1.12 228.66 14.29 0.00 0.00 262769.92 21845.33 246415.36
00:22:48.493 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme5n1 : 1.13 233.14 14.57 0.00 0.00 252237.23 5789.01 246415.36
00:22:48.493 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme6n1 : 1.12 240.89 15.06 0.00 0.00 238013.27 8847.36 235929.60
00:22:48.493 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme7n1 : 1.19 269.12 16.82 0.00 0.00 212561.75 12342.61 246415.36
00:22:48.493 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme8n1 : 1.19 268.41 16.78 0.00 0.00 209376.94 20971.52 241172.48
00:22:48.493 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme9n1 : 1.20 266.53 16.66 0.00 0.00 207423.32 8956.59 255153.49
00:22:48.493 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.493 Verification LBA range: start 0x0 length 0x400
00:22:48.493 Nvme10n1 : 1.21 264.98 16.56 0.00 0.00 204962.99 12451.84 272629.76
[2024-11-15T10:03:08.020Z] ===================================================================================================================
[2024-11-15T10:03:08.020Z] Total : 2466.85 154.18 0.00 0.00 236644.01 5789.01 272629.76
00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
11:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.754 rmmod nvme_tcp 00:22:48.754 rmmod nvme_fabrics 00:22:48.754 rmmod nvme_keyring 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 455154 ']' 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 455154 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 455154 ']' 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 455154 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:48.754 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 455154 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 455154' 00:22:49.014 killing process with pid 455154 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 455154 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 455154 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.014 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.560 00:22:51.560 real 0m16.899s 00:22:51.560 user 0m33.673s 00:22:51.560 sys 0m7.173s 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.560 ************************************ 00:22:51.560 END TEST nvmf_shutdown_tc1 00:22:51.560 ************************************ 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.560 ************************************ 00:22:51.560 START TEST nvmf_shutdown_tc2 00:22:51.560 ************************************ 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.560 11:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.560 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.560 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.560 11:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.560 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:51.561 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.561 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:51.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:22:51.561 00:22:51.561 --- 10.0.0.2 ping statistics --- 00:22:51.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.561 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:22:51.561 00:22:51.561 --- 10.0.0.1 ping statistics --- 00:22:51.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.561 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:22:51.561 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=457513 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 457513 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 457513 ']' 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:51.561 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.821 [2024-11-15 11:03:11.112483] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:22:51.821 [2024-11-15 11:03:11.112548] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.821 [2024-11-15 11:03:11.207595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.821 [2024-11-15 11:03:11.241956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.821 [2024-11-15 11:03:11.241985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.821 [2024-11-15 11:03:11.241991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.821 [2024-11-15 11:03:11.241996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.821 [2024-11-15 11:03:11.242000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.821 [2024-11-15 11:03:11.243624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.821 [2024-11-15 11:03:11.243779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.821 [2024-11-15 11:03:11.243902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.821 [2024-11-15 11:03:11.243904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.390 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.390 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:52.390 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.390 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.390 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.649 [2024-11-15 11:03:11.963565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.649 11:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.649 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.649 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:52.650 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.650 
11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.650 Malloc1 00:22:52.650 [2024-11-15 11:03:12.073204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.650 Malloc2 00:22:52.650 Malloc3 00:22:52.650 Malloc4 00:22:52.909 Malloc5 00:22:52.909 Malloc6 00:22:52.909 Malloc7 00:22:52.909 Malloc8 00:22:52.909 Malloc9 00:22:52.909 Malloc10 00:22:52.909 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.909 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:52.909 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.909 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=457900 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 457900 /var/tmp/bdevperf.sock 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 457900 ']' 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
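The bdevperf process being waited on here consumes a JSON controller list that the harness generates inline, via the gen_nvmf_target_json call visible in the next trace block. A condensed sketch of that generation step, pieced together from the heredoc stanzas the trace prints; the per-subsystem stanza matches the log, while the function framing around it is an approximation rather than the verbatim nvmf/common.sh helper:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach stanza per subsystem id; hdgst/ddgst fall back to
        # false unless the caller sets them, exactly as traced below.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the stanzas; the caller feeds the result to
    # "bdevperf --json /dev/fd/63" through process substitution.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

It is invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 in this run, which is why ten Nvme*n1 controllers show up in the verify job output further down.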
00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": 
"bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.170 } 00:22:53.170 EOF 00:22:53.170 )") 00:22:53.170 [2024-11-15 11:03:12.518260] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:22:53.170 [2024-11-15 11:03:12.518317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457900 ] 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.170 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.170 { 00:22:53.170 "params": { 00:22:53.170 "name": "Nvme$subsystem", 00:22:53.170 "trtype": "$TEST_TRANSPORT", 00:22:53.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.170 "adrfam": "ipv4", 00:22:53.170 "trsvcid": "$NVMF_PORT", 00:22:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.170 "hdgst": ${hdgst:-false}, 00:22:53.170 "ddgst": ${ddgst:-false} 00:22:53.170 }, 00:22:53.170 "method": "bdev_nvme_attach_controller" 00:22:53.171 } 00:22:53.171 EOF 00:22:53.171 )") 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.171 { 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme$subsystem", 00:22:53.171 "trtype": "$TEST_TRANSPORT", 00:22:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "$NVMF_PORT", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.171 "hdgst": ${hdgst:-false}, 00:22:53.171 "ddgst": ${ddgst:-false} 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 } 00:22:53.171 EOF 00:22:53.171 )") 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.171 { 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme$subsystem", 00:22:53.171 "trtype": "$TEST_TRANSPORT", 00:22:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "$NVMF_PORT", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.171 "hdgst": ${hdgst:-false}, 00:22:53.171 "ddgst": ${ddgst:-false} 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 } 00:22:53.171 EOF 00:22:53.171 )") 00:22:53.171 11:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:53.171 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme1", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme2", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme3", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme4", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme5", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme6", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme7", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme8", 00:22:53.171 "trtype": "tcp", 
00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme9", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 },{ 00:22:53.171 "params": { 00:22:53.171 "name": "Nvme10", 00:22:53.171 "trtype": "tcp", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "adrfam": "ipv4", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:53.171 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:53.171 "hdgst": false, 00:22:53.171 "ddgst": false 00:22:53.171 }, 00:22:53.171 "method": "bdev_nvme_attach_controller" 00:22:53.171 }' 00:22:53.171 [2024-11-15 11:03:12.625093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.171 [2024-11-15 11:03:12.661823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.081 Running I/O for 10 seconds... 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.081 11:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:55.081 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:55.342 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:55.602 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:55.603 11:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 457900 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 457900 ']' 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 457900 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:55.603 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 457900 00:22:55.863 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:55.863 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:55.863 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 457900' 00:22:55.863 killing process with pid 457900 00:22:55.863 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 457900 00:22:55.863 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 457900
00:22:55.863 Received shutdown signal, test time was about 0.995324 seconds
00:22:55.863
00:22:55.863 Latency(us)
00:22:55.863 [2024-11-15T10:03:15.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:55.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.863 Nvme1n1 : 0.98 261.47 16.34 0.00 0.00 241721.17 20097.71 228939.09
00:22:55.863 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.863 Nvme2n1 : 0.99 258.02 16.13 0.00 0.00 240353.92 18677.76 249910.61
00:22:55.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.863 Nvme3n1 : 0.98 260.29 16.27 0.00 0.00 233093.55 25886.72 244667.73
00:22:55.863 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.863 Nvme4n1 : 0.98 260.03 16.25 0.00 0.00 228320.64 14199.47 246415.36
00:22:55.863 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.863 Nvme5n1 : 0.99 258.92 16.18 0.00 0.00 224867.84 21080.75 246415.36
00:22:55.863 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.863 Verification LBA range: start 0x0 length 0x400
00:22:55.864 Nvme6n1 : 0.96 200.24 12.51 0.00 0.00 283458.56 19551.57 242920.11
00:22:55.864 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.864 Verification LBA range: start 0x0 length 0x400
00:22:55.864 Nvme7n1 : 0.97 262.81 16.43 0.00 0.00 211535.57 18022.40 244667.73
00:22:55.864 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.864 Verification LBA range: start 0x0 length 0x400
00:22:55.864 Nvme8n1 : 0.99 257.45 16.09 0.00 0.00 211573.87 13871.79 241172.48
00:22:55.864 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.864 Verification LBA range: start 0x0 length 0x400
00:22:55.864 Nvme9n1 : 0.97 198.93 12.43 0.00 0.00 266302.01 13216.43 253405.87
00:22:55.864 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.864 Verification LBA range: start 0x0 length 0x400
00:22:55.864 Nvme10n1 : 0.97 197.64 12.35 0.00 0.00 261807.50 18896.21 272629.76
00:22:55.864 [2024-11-15T10:03:15.391Z] ===================================================================================================================
00:22:55.864 [2024-11-15T10:03:15.391Z] Total : 2415.80 150.99 0.00 0.00 237853.26 13216.43 272629.76
00:22:55.864 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.245 rmmod nvme_tcp 00:22:57.245 rmmod nvme_fabrics 00:22:57.245 rmmod nvme_keyring 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 457513 ']' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 457513 00:22:57.245 11:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 457513' 00:22:57.245 killing process with pid 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 457513 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.245 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.789 00:22:59.789 real 0m8.124s 00:22:59.789 user 0m24.937s 00:22:59.789 sys 0m1.354s 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 ************************************ 00:22:59.789 END TEST nvmf_shutdown_tc2 00:22:59.789 ************************************ 00:22:59.789 11:03:18 
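Between the two test cases the harness tears the whole fabric back down: kill the target, sync, unload the nvme-tcp modules, strip its firewall rules, and delete the namespace. The firewall step explains the odd comment string attached to the ACCEPT rule earlier: every rule the harness inserts is tagged SPDK_NVMF, so cleanup is a stateless round-trip of the ruleset. A sketch of the pair of helpers as they appear in the trace (the names ipts and iptr come from the log; the bodies are reconstructed from the traced commands, not copied from the library):

ipts() {
    # Insert a rule, tagging it so it can be found again at teardown.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Dump the ruleset, drop every SPDK_NVMF-tagged rule, restore the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

Tagging at insertion time is what keeps the grep-based teardown safe even when a test dies mid-run and nobody recorded which rules it added.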
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 ************************************ 00:22:59.789 START TEST nvmf_shutdown_tc3 00:22:59.789 ************************************ 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.789 11:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.789 11:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.789 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.790 11:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.790 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:22:59.790 00:22:59.790 --- 10.0.0.2 ping statistics --- 00:22:59.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.790 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:22:59.790 00:22:59.790 --- 10.0.0.1 ping statistics --- 00:22:59.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.790 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=459365 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 459365 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 459365 ']' 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
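Note: the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-@291) builds a two-endpoint NVMe/TCP topology on a single host by moving the target-side NIC port into a network namespace, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic crosses a real link. Distilled into a standalone sketch (interface names and addresses taken from this run; must be run as root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                         # initiator -> target, as verified above
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt launch above carries the `ip netns exec cvl_0_0_ns_spdk` prefix three times because NVMF_TARGET_NS_CMD gets prepended to NVMF_APP on each expansion; re-entering the namespace you are already in is a no-op, so the repetition is harmless.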
00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.790 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.050 [2024-11-15 11:03:19.317983] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:23:00.050 [2024-11-15 11:03:19.318048] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.050 [2024-11-15 11:03:19.410883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.050 [2024-11-15 11:03:19.445572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.050 [2024-11-15 11:03:19.445601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.050 [2024-11-15 11:03:19.445607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.050 [2024-11-15 11:03:19.445612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.050 [2024-11-15 11:03:19.445616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.050 [2024-11-15 11:03:19.447199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.050 [2024-11-15 11:03:19.447353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.050 [2024-11-15 11:03:19.447502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.050 [2024-11-15 11:03:19.447503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.620 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.620 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:00.620 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.620 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:00.620 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.881 [2024-11-15 11:03:20.171275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.881 11:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.881 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.881 Malloc1 
00:23:00.881 [2024-11-15 11:03:20.280054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.881 Malloc2 00:23:00.881 Malloc3 00:23:00.881 Malloc4 00:23:01.150 Malloc5 00:23:01.151 Malloc6 00:23:01.151 Malloc7 00:23:01.151 Malloc8 00:23:01.151 Malloc9 00:23:01.151 Malloc10 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=459741 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 459741 /var/tmp/bdevperf.sock 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 459741 ']' 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
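Note: the ten create_subsystems iterations traced above (target/shutdown.sh@27-@36) append one heredoc RPC batch per subsystem to rpcs.txt and then replay the file through a single rpc_cmd call; the batch bodies themselves are not echoed in this excerpt. Judging from the Malloc1-Malloc10 bdev names printed above and the cnode1-cnode10 subsystems listening on 10.0.0.2:4420 in the bdevperf config below, each batch plausibly amounts to the following (a hypothetical reconstruction; the malloc size and block-size values are placeholders, not taken from this run):

    # Hypothetical reconstruction of the per-subsystem batch at shutdown.sh@29.
    MALLOC_SIZE_MB=128; MALLOC_BLOCK_SIZE=512    # placeholder sizes
    for i in {1..10}; do
        cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc$i $MALLOC_SIZE_MB $MALLOC_BLOCK_SIZE
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    rpc_cmd < rpcs.txt    # shutdown.sh@36 replays the whole batch (stdin replay assumed)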
00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:01.151 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": "bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": "bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": 
"bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": "bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": "bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.411 { 00:23:01.411 "params": { 00:23:01.411 "name": "Nvme$subsystem", 00:23:01.411 "trtype": "$TEST_TRANSPORT", 00:23:01.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.411 "adrfam": "ipv4", 00:23:01.411 "trsvcid": "$NVMF_PORT", 00:23:01.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.411 "hdgst": ${hdgst:-false}, 00:23:01.411 "ddgst": ${ddgst:-false} 00:23:01.411 }, 00:23:01.411 "method": "bdev_nvme_attach_controller" 00:23:01.411 } 00:23:01.411 EOF 00:23:01.411 )") 00:23:01.411 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.411 [2024-11-15 11:03:20.724302] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:23:01.412 [2024-11-15 11:03:20.724356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459741 ] 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.412 { 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme$subsystem", 00:23:01.412 "trtype": "$TEST_TRANSPORT", 00:23:01.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "$NVMF_PORT", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.412 "hdgst": ${hdgst:-false}, 00:23:01.412 "ddgst": ${ddgst:-false} 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 } 00:23:01.412 EOF 00:23:01.412 )") 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.412 { 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme$subsystem", 00:23:01.412 "trtype": "$TEST_TRANSPORT", 00:23:01.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "$NVMF_PORT", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.412 "hdgst": ${hdgst:-false}, 00:23:01.412 "ddgst": ${ddgst:-false} 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 } 00:23:01.412 EOF 00:23:01.412 )") 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.412 { 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme$subsystem", 00:23:01.412 "trtype": "$TEST_TRANSPORT", 00:23:01.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "$NVMF_PORT", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.412 "hdgst": ${hdgst:-false}, 00:23:01.412 "ddgst": ${ddgst:-false} 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 } 00:23:01.412 EOF 00:23:01.412 )") 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.412 { 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme$subsystem", 00:23:01.412 "trtype": "$TEST_TRANSPORT", 00:23:01.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.412 
"adrfam": "ipv4", 00:23:01.412 "trsvcid": "$NVMF_PORT", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.412 "hdgst": ${hdgst:-false}, 00:23:01.412 "ddgst": ${ddgst:-false} 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 } 00:23:01.412 EOF 00:23:01.412 )") 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.412 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme1", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme2", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme3", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme4", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme5", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme6", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme7", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 
00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme8", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme9", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 },{ 00:23:01.412 "params": { 00:23:01.412 "name": "Nvme10", 00:23:01.412 "trtype": "tcp", 00:23:01.412 "traddr": "10.0.0.2", 00:23:01.412 "adrfam": "ipv4", 00:23:01.412 "trsvcid": "4420", 00:23:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.412 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.412 "hdgst": false, 00:23:01.412 "ddgst": false 00:23:01.412 }, 00:23:01.412 "method": "bdev_nvme_attach_controller" 00:23:01.412 }' 00:23:01.412 [2024-11-15 11:03:20.813903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.412 [2024-11-15 11:03:20.850417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.792 Running I/O for 10 seconds... 
00:23:02.792 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:02.792 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:02.793 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.793 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.793 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:03.052 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:03.312 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.571 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 459365 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 459365 ']' 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 459365 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 459365 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:03.846 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:03.846 11:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 459365'
killing process with pid 459365
11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 459365
11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 459365
00:23:03.846 [2024-11-15 11:03:23.176392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8543b0 is same with the state(6) to be set
00:23:03.846 [2024-11-15 11:03:23.177197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856f80 is same with the state(6) to be set
[... identical message repeated verbatim for tqpair=0x856f80 through 11:03:23.177523; duplicate lines condensed ...]
00:23:03.847 [2024-11-15 11:03:23.179522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x854d70 is same with the state(6) to be set
[... identical message repeated verbatim for tqpair=0x854d70 through 11:03:23.179847; duplicate lines condensed ...]
00:23:03.848 [2024-11-15 11:03:23.180872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set
[... identical message repeated verbatim for tqpair=0x855260 through 11:03:23.181100; duplicate lines condensed ...]
00:23:03.848 [2024-11-15 11:03:23.181105]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.848 [2024-11-15 11:03:23.181165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855260 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.181755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855730 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 
00:23:03.849 [2024-11-15 11:03:23.182337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is 
same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.182628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855c00 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183591] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.849 [2024-11-15 11:03:23.183643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 
00:23:03.850 [2024-11-15 11:03:23.183697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is 
same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.183866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8560d0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184705] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.850 [2024-11-15 11:03:23.184747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 
00:23:03.851 [2024-11-15 11:03:23.184808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.184864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8565c0 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is 
same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.851 [2024-11-15 11:03:23.185549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x856a90 is same with the state(6) to be set 00:23:03.852 [2024-11-15 11:03:23.185580] 
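The flood of identical messages above comes from a defensive check in SPDK's TCP transport: the qpair receive-state setter logs an error and returns when asked to move a queue pair into the state it already holds, which is what happens when a poller keeps re-driving connections that are being torn down. Below is a minimal, self-contained sketch of that guard pattern; the enum names and values, the struct layout, and the function body are illustrative assumptions rather than SPDK's actual tcp.c source (only the message format is taken from the log, and whether SPDK's value 6 maps to an error state is likewise an assumption).

#include <stdio.h>

/* Illustrative receive-state values; "state(6)" in the log is just the
 * integer value of the requested state printed with %d. */
enum tcp_pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY,   /* 0 */
	RECV_STATE_AWAIT_PDU_CH,      /* 1 */
	RECV_STATE_AWAIT_PDU_PSH,     /* 2 */
	RECV_STATE_AWAIT_PDU_PAYLOAD, /* 3 */
	RECV_STATE_AWAIT_REQ,         /* 4 */
	RECV_STATE_QUIESCING,         /* 5 */
	RECV_STATE_ERROR,             /* 6 (assumed mapping) */
};

struct tcp_qpair {
	enum tcp_pdu_recv_state recv_state; /* hypothetical, trimmed-down qpair */
};

/* Guard pattern that generates the repeated log line: setting the recv
 * state to its current value is reported and treated as a no-op. */
static void
set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int
main(void)
{
	struct tcp_qpair tqpair = { .recv_state = RECV_STATE_ERROR };

	/* A teardown path that re-requests the current state emits one
	 * error line per call, hence the repetition in the log above. */
	set_recv_state(&tqpair, RECV_STATE_ERROR);
	set_recv_state(&tqpair, RECV_STATE_ERROR);
	return 0;
}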
00:23:03.852 [2024-11-15 11:03:23.195189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:03.852 [2024-11-15 11:03:23.195221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.852 [... the same command/completion pair repeated for cid:1, cid:2, cid:3 ...]
00:23:03.852 [2024-11-15 11:03:23.195280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03150 is same with the state(6) to be set
00:23:03.852 [... identical four-command abort group plus recv-state error repeated for tqpair=0x10390d0, 0xb26610, 0x107c820, 0x102fcb0, 0xc0e850, 0xc0c790, 0xc0cfc0, 0xc0ecb0, and 0x1039ee0, 11:03:23.195311-11:03:23.196093 ...]
00:23:03.853 [2024-11-15 11:03:23.196451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.853 [2024-11-15 11:03:23.196468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.853 [2024-11-15 11:03:23.196483] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.853 [2024-11-15 11:03:23.196923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.853 [2024-11-15 11:03:23.196930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.196940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.196947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.196956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.196964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.196973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.196980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.196990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.196997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.854 [2024-11-15 11:03:23.197556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.854 [2024-11-15 11:03:23.197568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12900 is same with the state(6) to be set 00:23:03.854 [2024-11-15 11:03:23.216896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03150 (9): Bad file descriptor 00:23:03.854 [2024-11-15 11:03:23.216943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10390d0 (9): Bad file descriptor 00:23:03.854 [2024-11-15 11:03:23.216962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26610 (9): Bad file descriptor 00:23:03.854 [2024-11-15 11:03:23.216980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107c820 (9): Bad file descriptor 00:23:03.854 [2024-11-15 11:03:23.216995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102fcb0 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e850 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c790 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0cfc0 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0ecb0 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1039ee0 (9): Bad file descriptor 00:23:03.855 [2024-11-15 11:03:23.217122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.855 [2024-11-15 11:03:23.217133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.855 [2024-11-15 11:03:23.217149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.855 [2024-11-15 11:03:23.217157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.855 [2024-11-15 11:03:23.217167] nvme_qpair.c: 
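For readers decoding the repeated completions above: the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe Status Code Type / Status Code (SCT 0x0 is Generic Command Status, and SC 0x08 within it is "Command Aborted due to SQ Deletion"), and the "(9)" in the flush errors is an errno. A minimal standalone C sketch, not SPDK code and illustrative only, that decodes the two values seen in this log:

    /* decode_status.c - standalone sketch, not part of SPDK.
     * Decodes the "(00/08)" status pair and the "(9)" errno seen
     * above. Build: cc decode_status.c -o decode_status */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* "(00/08)" = Status Code Type 0x0 / Status Code 0x08.
         * Per the NVMe base spec, SCT 0x0 is Generic Command Status
         * and SC 0x08 is "Command Aborted due to SQ Deletion",
         * which SPDK renders as "ABORTED - SQ DELETION". */
        unsigned sct = 0x00, sc = 0x08;
        printf("SCT 0x%02x / SC 0x%02x -> %s\n", sct, sc,
               (sct == 0x00 && sc == 0x08) ?
               "ABORTED - SQ DELETION" : "other status");

        /* "Failed to flush tqpair=... (9)" reports errno 9 (EBADF):
         * the TCP socket was already torn down when the flush ran. */
        printf("errno 9 -> %s\n", strerror(9));
        return 0;
    }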
00:23:03.855 [2024-11-15 11:03:23.217184 - 11:03:23.218230] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-60 nsid:1 lba:24576-32256 (lba = 24576 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
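The WRITE and READ runs above form an arithmetic progression: each aborted command covers 128 blocks and the LBA advances by 128 per command ID, so lba = 24576 + 128 * cid. A small C sketch, illustrative only, that regenerates the aborted ranges for cross-checking against the log:

    /* lba_map.c - illustrative sketch: reproduce the cid -> lba
     * mapping of the aborted I/O above (lba = 24576 + 128 * cid,
     * len 128 blocks per command). */
    #include <stdio.h>

    int main(void)
    {
        const unsigned base_lba = 24576, len = 128;
        for (unsigned cid = 0; cid <= 63; cid++) {
            unsigned lba = base_lba + len * cid;
            if (cid < 3 || cid > 61)   /* print head and tail only */
                printf("cid:%-2u lba:%u len:%u\n", cid, lba, len);
        }
        return 0;  /* cid 63 yields lba 32640, matching the log */
    }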
00:23:03.856 [2024-11-15 11:03:23.219575 - 11:03:23.220280] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-38 nsid:1 lba:24576-29440 (lba = 24576 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.857 [2024-11-15 11:03:23.220280] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.857 [2024-11-15 11:03:23.220426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.857 [2024-11-15 11:03:23.220433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.220616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.220624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.858 [2024-11-15 11:03:23.220641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.858 [2024-11-15 11:03:23.220657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.858 [2024-11-15 11:03:23.220676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.858 [2024-11-15 11:03:23.220693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.858 [2024-11-15 11:03:23.220709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.858 [2024-11-15 11:03:23.220984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:03.858 [2024-11-15 11:03:23.223698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:03.858 [2024-11-15 11:03:23.224161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.858 [2024-11-15 11:03:23.224178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e850 with addr=10.0.0.2, port=4420
00:23:03.858 [2024-11-15 11:03:23.224187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e850 is same with the state(6) to be set
00:23:03.858 [2024-11-15 11:03:23.224510] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224553] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224611] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224651] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224687] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224729] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224770] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.858 [2024-11-15 11:03:23.224782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:03.858 [2024-11-15 11:03:23.225026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.858 [2024-11-15 11:03:23.225040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0ecb0 with addr=10.0.0.2, port=4420
00:23:03.858 [2024-11-15 11:03:23.225048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0ecb0 is same with the state(6) to be set
00:23:03.858 [2024-11-15 11:03:23.225060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e850 (9): Bad file descriptor
00:23:03.858 [2024-11-15 11:03:23.225951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.858 [2024-11-15 11:03:23.225966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1039ee0 with addr=10.0.0.2, port=4420
00:23:03.858 [2024-11-15 11:03:23.225974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1039ee0 is same with the state(6) to be set
00:23:03.858 [2024-11-15 11:03:23.225984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0ecb0 (9): Bad file descriptor
00:23:03.858 [2024-11-15 11:03:23.225994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:03.858 [2024-11-15 11:03:23.226001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:03.858 [2024-11-15 11:03:23.226010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:03.858 [2024-11-15 11:03:23.226023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:03.858 [2024-11-15 11:03:23.226101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1039ee0 (9): Bad file descriptor
00:23:03.858 [2024-11-15 11:03:23.226112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:03.858 [2024-11-15 11:03:23.226118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:03.858 [2024-11-15 11:03:23.226125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:03.858 [2024-11-15 11:03:23.226132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:03.858 [2024-11-15 11:03:23.226182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:03.858 [2024-11-15 11:03:23.226190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:03.858 [2024-11-15 11:03:23.226197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:03.858 [2024-11-15 11:03:23.226203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
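
For readers decoding the repeated "ABORTED - SQ DELETION (00/08)" completions above: SPDK prints the NVMe status as (SCT/SC), i.e. Status Code Type 0x0 (Generic Command Status) and Status Code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; every READ/WRITE still outstanding on a submission queue when that queue is deleted during the reset is completed with this status. Below is a minimal standalone C sketch of the same decode, an editorial illustration rather than SPDK source; it assumes the 16-bit completion status word layout with the phase bit at bit 0, SC in bits 8:1, and SCT in bits 11:9.

    /* Decode the 16-bit NVMe completion status word (phase bit at bit 0,
     * SC in bits 8:1, SCT in bits 11:9 per the NVMe base specification).
     * Hypothetical helper for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = (0x0u << 9) | (0x08u << 1); /* SCT=0x0, SC=0x08 */
        uint8_t sct = (status >> 9) & 0x7;
        uint8_t sc = (status >> 1) & 0xff;

        /* Prints "(00/08) ABORTED - SQ DELETION", matching the log lines. */
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
        return 0;
    }
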
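The "connect() failed, errno = 111" lines are Linux's ECONNREFUSED: while the target side is being torn down there is no listener on 10.0.0.2:4420, so each reconnect attempt by the host is refused and the qpair never gets out of its error recv state. A self-contained POSIX C sketch that reproduces the same errno against a closed port (generic illustration, not SPDK's posix.c):

    /* Attempt a TCP connection to the NVMe-oF target address from the log.
     * With no listener on the port, connect() fails with errno 111
     * (ECONNREFUSED) on Linux. Illustration only. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);      /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
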
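Taken together, each controller above follows the same progression: nvme_ctrlr_disconnect ("resetting controller"), a reconnect attempt that dies on the refused TCP connect, "Ctrlr is in error state" and "controller reinitialization failed", and finally bdev_nvme giving up with "Resetting controller failed." The sketch below condenses that flow; the enum and reconnect() stub are hypothetical stand-ins, not SPDK's actual state machine.

    /* Hypothetical condensation of the per-controller reset flow above. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ctrlr_state { RESETTING, RECONNECTING, ERROR_STATE, FAILED };

    /* Stand-in for the TCP reconnect; returns false just as the refused
     * connect() calls in the log do. */
    static bool reconnect(void) { return false; }

    int main(void)
    {
        enum ctrlr_state st = RESETTING;  /* "resetting controller" */
        st = RECONNECTING;
        if (!reconnect()) {
            st = ERROR_STATE;             /* "Ctrlr is in error state" */
            fprintf(stderr, "controller reinitialization failed\n");
            st = FAILED;                  /* "in failed state." */
            fprintf(stderr, "Resetting controller failed.\n");
        }
        return st == FAILED ? 1 : 0;
    }
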
00:23:03.858 [2024-11-15 11:03:23.227029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.858 [2024-11-15 11:03:23.227185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.858 [2024-11-15 11:03:23.227195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 
11:03:23.227211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.859 [2024-11-15 11:03:23.227843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.859 [2024-11-15 11:03:23.227850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.227984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.227993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.228119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.228127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe139e0 is same with the state(6) to be set 00:23:03.860 [2024-11-15 11:03:23.229414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229514] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.860 [2024-11-15 11:03:23.229770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.860 [2024-11-15 11:03:23.229777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.861 [2024-11-15 11:03:23.229787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.861 [2024-11-15 11:03:23.229795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.861 [2024-11-15 11:03:23.229805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.861 [2024-11-15 11:03:23.229813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.861 [2024-11-15 11:03:23.229822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.861 [2024-11-15 11:03:23.229830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.861 [2024-11-15 11:03:23.229841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.861 [2024-11-15 11:03:23.229850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.861 [2024-11-15 11:03:23.229860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.861 [2024-11-15 11:03:23.229868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.861 [2024-11-15 11:03:23.229878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.861 [2024-11-15 11:03:23.229887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 36 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:27-62, lba:28032-32512, len:128, lba stride 128 ...]
00:23:03.862 [2024-11-15 11:03:23.230517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.862 [2024-11-15 11:03:23.230524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.862 [2024-11-15 11:03:23.230532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14b30 is same with the state(6) to be set
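Each aborted command above is a 128-block READ, and consecutive cids step the lba by exactly 128, so the drained submission queue held one contiguous read stream: cid:26 through cid:63 is 38 commands covering lba 27904 up to 32640 + 128 = 32768, i.e. 38 x 128 = 4864 blocks with no gaps. The same drain pattern repeats for each TCP queue pair torn down below.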
00:23:03.862 [2024-11-15 11:03:23.231811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.862 [2024-11-15 11:03:23.231825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-62, lba:16512-24320 ...]
00:23:03.863 [2024-11-15 11:03:23.232895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.863 [2024-11-15 11:03:23.232902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.863 [2024-11-15 11:03:23.232910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1011720 is same with the state(6) to be set
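The "(00/08)" on every aborted completion is the NVMe (status code type / status code) pair: type 0x0 selects the generic command status set, and code 0x08 in that set is Command Aborted due to SQ Deletion, which spdk_nvme_print_completion renders as ABORTED - SQ DELETION. A minimal decoding sketch, illustrative only and not SPDK code (the mapping follows the NVMe base specification):

    GENERIC_STATUS = {  # status code type 0x0: generic command status
        0x00: "SUCCESSFUL COMPLETION",
        0x07: "COMMAND ABORT REQUESTED",
        0x08: "COMMAND ABORTED DUE TO SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        """Turn a log token like '00/08' into a readable NVMe status."""
        sct, sc = (int(x, 16) for x in pair.split("/"))
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct 0x{sct:x}, sc 0x{sc:02x}"

    print(decode_status("00/08"))  # -> COMMAND ABORTED DUE TO SQ DELETION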
00:23:03.863 [2024-11-15 11:03:23.234196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.863 [2024-11-15 11:03:23.234208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-62, lba:24704-32512 ...]
00:23:03.865 [2024-11-15 11:03:23.235303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.865 [2024-11-15 11:03:23.235310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.865 [2024-11-15 11:03:23.235318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1012a40 is same with the state(6) to be set
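Each of these queue-pair drains prints dozens of near-identical line pairs, which is what makes the raw console log hard to scan. The condenser below is a hypothetical helper written against the line format above (the READ_RE pattern and the summarize_aborts name are assumptions, not part of SPDK or the test scripts); fed the per-line log on stdin, it folds every READ / ABORTED run into a single summary line and passes everything else through:

    import re
    import sys

    # Matches the READ print from nvme_io_qpair_print_command above.
    READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

    def summarize_aborts(lines):
        """Fold consecutive READ/ABORTED pairs into one summary per run."""
        run = []  # (cid, lba, len) for the current run of aborted READs
        for line in lines:
            m = READ_RE.search(line)
            if m:
                _sqid, cid, lba, length = (int(g) for g in m.groups())
                run.append((cid, lba, length))
            elif "ABORTED - SQ DELETION" in line:
                continue  # completions pair 1:1 with the READs in the run
            else:
                if run:
                    first, last = run[0], run[-1]
                    yield (f"{len(run)} READs aborted by SQ deletion: "
                           f"cid:{first[0]}-{last[0]} lba:{first[1]}-{last[1]}")
                    run = []
                yield line.rstrip()
        if run:  # flush a run left open at end of input
            first, last = run[0], run[-1]
            yield (f"{len(run)} READs aborted by SQ deletion: "
                   f"cid:{first[0]}-{last[0]} lba:{first[1]}-{last[1]}")

    if __name__ == "__main__":
        for out in summarize_aborts(sys.stdin):
            print(out)

With this, each drain above reduces to one summary line followed by the nvme_tcp recv-state error for the affected tqpair.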
00:23:03.865 [2024-11-15 11:03:23.236589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.865 [2024-11-15 11:03:23.236603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 41 more identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-41, lba:24704-29824 ...]
00:23:03.866 [2024-11-15 11:03:23.237328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.866 [2024-11-15 11:03:23.237336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.866 [2024-11-15 11:03:23.237345] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.237436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.237445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.242037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.242082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.242093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.242110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.242118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.242128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.242135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.242146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.866 [2024-11-15 11:03:23.242154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.866 [2024-11-15 11:03:23.242164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.242340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.242349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1013d60 is same with the state(6) to be set 00:23:03.867 [2024-11-15 11:03:23.243699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243859] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.243990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.243998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.867 [2024-11-15 11:03:23.244092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.867 [2024-11-15 11:03:23.244100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:03.868 [2024-11-15 11:03:23.244556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.868 [2024-11-15 11:03:23.244655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.868 [2024-11-15 11:03:23.244665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 
11:03:23.244737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.244814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.244822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1015030 is same with the state(6) to be set 00:23:03.869 [2024-11-15 11:03:23.246123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.869 [2024-11-15 11:03:23.246476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.869 [2024-11-15 11:03:23.246483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.246988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.247005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.247015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.870 [2024-11-15 11:03:23.247023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.870 [2024-11-15 11:03:23.247032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.871 [2024-11-15 11:03:23.247273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.871 [2024-11-15 11:03:23.247281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10162f0 is same with the state(6) to be set 00:23:03.871 [2024-11-15 11:03:23.249307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
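[Editor's note: the three elided runs above are mechanical repetitions of a single two-record pattern, one nvme_io_qpair_print_command READ followed by one ABORTED - SQ DELETION (00/08) completion. If the raw console output is saved to a file, a minimal sketch like the following can recover the counts, the lba span, and the affected tqpair addresses summarized above. The script name and the use of stdin are hypothetical conveniences, not part of the SPDK tooling; the regexes only rely on the literal log format visible here.]

#!/usr/bin/env python3
# summarize_aborts.py -- minimal sketch: tally SPDK "ABORTED - SQ DELETION"
# READ floods in a saved console log. Pipe the raw build output on stdin.
import re
import sys

cmd_re = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: READ '
                    r'sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)')
abort_re = re.compile(r'ABORTED - SQ DELETION \(00/08\)')
state_re = re.compile(r'recv state of tqpair=(0x[0-9a-f]+)')

text = sys.stdin.read()
cmds = cmd_re.findall(text)           # one tuple per printed READ command
aborts = len(abort_re.findall(text))  # one per aborted completion
qpairs = state_re.findall(text)       # tqpairs hitting the recv-state error

lbas = sorted(int(lba) for _, _, lba, _ in cmds)
print(f'{len(cmds)} READ commands printed, {aborts} completions aborted by SQ deletion')
if lbas:
    print(f'overall lba span {lbas[0]}..{lbas[-1]}, len:128 blocks per command')
print('recv-state errors on qpairs:', ', '.join(dict.fromkeys(qpairs)))

[Run as python3 summarize_aborts.py < console.log; on the runs above it would report the overall lba span 16384..32640 and the three qpair addresses 0x1013d60, 0x1015030, 0x10162f0.]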
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller
[2024-11-15 11:03:23.249348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
[2024-11-15 11:03:23.249361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
[2024-11-15 11:03:23.249373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
[2024-11-15 11:03:23.249460] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
[2024-11-15 11:03:23.249477] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
[2024-11-15 11:03:23.249489] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
[2024-11-15 11:03:23.249586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
[2024-11-15 11:03:23.249601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
task offset: 24576 on job bdev=Nvme2n1 fails

Latency(us): all jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400, and each ended in about its listed runtime with error:

Job        runtime(s)   IOPS      MiB/s    Fail/s   TO/s   Average      min        max
Nvme1n1    0.97          198.60    12.41    66.20    0.00   239045.33   16274.77   253405.87
Nvme2n1    0.96          199.17    12.45    66.39    0.00   233610.45   22282.24   248162.99
Nvme3n1    0.97          197.15    12.32    65.72    0.00   231254.83   21189.97   255153.49
Nvme4n1    0.98          196.67    12.29    65.56    0.00   227214.40   11905.71   255153.49
Nvme5n1    0.97          198.34    12.40    66.11    0.00   220346.72    5980.16   234181.97
Nvme6n1    0.98          130.80     8.17    65.40    0.00   291134.01   20097.71   267386.88
Nvme7n1    0.98          195.71    12.23    65.24    0.00   214141.87   31020.37   235929.60
Nvme8n1    0.99          194.32    12.14    64.77    0.00   211117.44   18677.76   234181.97
Nvme9n1    0.99          129.23     8.08    64.61    0.00   276227.70   22719.15   249910.61
Nvme10n1   0.99          128.91     8.06    64.45    0.00   270864.21   16820.91   274377.39
Total      -            1768.89   110.56   654.45    0.00   238421.67    5980.16   274377.39

[2024-11-15 11:03:23.277191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-11-15 11:03:23.277222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
[2024-11-15 11:03:23.277677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-15 11:03:23.277695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0c790 with addr=10.0.0.2, port=4420
[2024-11-15 11:03:23.277706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0c790 is same with the state(6) to be set
[2024-11-15 11:03:23.277998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-15 11:03:23.278009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0cfc0 with addr=10.0.0.2, port=4420
[2024-11-15 11:03:23.278016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0cfc0 is same with the state(6) to be set
[2024-11-15 11:03:23.278325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-15 11:03:23.278335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10390d0 with addr=10.0.0.2, port=4420
[2024-11-15 11:03:23.278343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10390d0 is same with the state(6) to be set
[2024-11-15 11:03:23.278515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-15 11:03:23.278524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26610 with addr=10.0.0.2, port=4420
[2024-11-15 11:03:23.278532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26610 is same with the state(6) to be set
[2024-11-15 11:03:23.280419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
[2024-11-15 11:03:23.280441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-11-15 11:03:23.280688]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.280702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107c820 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.280710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107c820 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.281034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.281044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc03150 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.281051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03150 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.281377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.281387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102fcb0 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.281394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102fcb0 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.281407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c790 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.281420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0cfc0 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.281430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10390d0 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.281440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26610 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.281469] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:03.872 [2024-11-15 11:03:23.281484] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:03.872 [2024-11-15 11:03:23.281496] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:03.872 [2024-11-15 11:03:23.281508] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:03.872 [2024-11-15 11:03:23.281520] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
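(Editor's aside, decoding the burst of socket errors above: errno = 111 is ECONNREFUSED on Linux, meaning the target's TCP listener on 10.0.0.2:4420 is already torn down while bdev_nvme is still trying to reconnect its qpairs, which is exactly the condition a shutdown test is meant to provoke. A quick, illustrative way to confirm the errno mapping, not part of the test scripts:

python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
# prints: ECONNREFUSED Connection refused
)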
00:23:03.872 [2024-11-15 11:03:23.281596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:03.872 [2024-11-15 11:03:23.281813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.281826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e850 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.281833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e850 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.282035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.282047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0ecb0 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.282055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0ecb0 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.282064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107c820 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03150 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102fcb0 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:23:03.872 [2024-11-15 11:03:23.282184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.872 [2024-11-15 11:03:23.282471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1039ee0 with addr=10.0.0.2, port=4420 00:23:03.872 [2024-11-15 11:03:23.282478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1039ee0 is same with the state(6) to be set 00:23:03.872 [2024-11-15 11:03:23.282487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e850 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0ecb0 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:03.872 [2024-11-15 11:03:23.282617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1039ee0 (9): Bad file descriptor 00:23:03.872 [2024-11-15 11:03:23.282626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:03.872 [2024-11-15 11:03:23.282667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:03.872 [2024-11-15 11:03:23.282674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:03.872 [2024-11-15 11:03:23.282701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:03.872 [2024-11-15 11:03:23.282709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:03.873 [2024-11-15 11:03:23.282716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:03.873 [2024-11-15 11:03:23.282722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
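(Editor's aside: the cascade that closes out each controller above, "Ctrlr is in error state" -> "controller reinitialization failed" -> "in failed state" -> "Resetting controller failed", is the terminal path once reconnect attempts are exhausted against a dead listener. When reproducing this by hand, the length of that retry window is governed by the bdev_nvme options; a hedged rpc.py sketch of one plausible setup follows, with the timeout values chosen for illustration rather than taken from this run:

# hypothetical values; this test's own defaults may differ
./scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
)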
00:23:04.133 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 459741 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 459741 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 459741 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.073 rmmod nvme_tcp 00:23:05.073 
rmmod nvme_fabrics 00:23:05.073 rmmod nvme_keyring 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 459365 ']' 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 459365 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 459365 ']' 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 459365 00:23:05.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (459365) - No such process 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 459365 is not found' 00:23:05.073 Process with pid 459365 is not found 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.073 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.616 00:23:07.616 real 0m7.715s 00:23:07.616 user 0m18.826s 00:23:07.616 sys 0m1.241s 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.616 ************************************ 00:23:07.616 END TEST nvmf_shutdown_tc3 00:23:07.616 ************************************ 00:23:07.616 11:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.616 ************************************ 00:23:07.616 START TEST nvmf_shutdown_tc4 00:23:07.616 ************************************ 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:07.616 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.617 11:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.617 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.618 11:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:23:07.618 00:23:07.618 --- 10.0.0.2 ping statistics --- 00:23:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.618 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:23:07.618 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:07.618 00:23:07.618 --- 10.0.0.1 ping statistics --- 00:23:07.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.618 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=460981 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 460981 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 460981 ']' 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
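(Editor's aside, recapping the plumbing the trace above just verified: cvl_0_0 was moved into the private namespace cvl_0_0_ns_spdk as the target-side port (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), both directions ping, and the target app is then launched inside that namespace. Condensed into plain commands, the sequence is roughly the following; this is a sketch of what the sourced nvmf/common.sh helpers do, not their verbatim text:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
# then poll the RPC socket until the app answers, e.g.:
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
)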
00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:07.618 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.618 [2024-11-15 11:03:27.119475] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:23:07.618 [2024-11-15 11:03:27.119541] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.878 [2024-11-15 11:03:27.215856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.878 [2024-11-15 11:03:27.249423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.878 [2024-11-15 11:03:27.249453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.878 [2024-11-15 11:03:27.249459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.878 [2024-11-15 11:03:27.249463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.878 [2024-11-15 11:03:27.249467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.878 [2024-11-15 11:03:27.250811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.878 [2024-11-15 11:03:27.250951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.878 [2024-11-15 11:03:27.251101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.878 [2024-11-15 11:03:27.251103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.448 [2024-11-15 11:03:27.966352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:08.448 11:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.448 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.709 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.709 Malloc1 
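(Editor's aside: the create_subsystems loop above, the repeated 'for i in "${num_subsystems[@]}"' / 'cat' pairs, appends one RPC batch per subsystem to rpcs.txt and then replays the file over the RPC socket, which is what produces the Malloc1..Malloc10 bdevs and the TCP listener notices that follow. A hedged reconstruction of what each batch amounts to, inferred from the visible output rather than quoted from shutdown.sh; the malloc size/block-size and serial-number arguments are illustrative:

for i in $(seq 1 10); do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
./scripts/rpc.py < rpcs.txt   # rpc.py accepts a batch of commands on stdin
)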
00:23:08.709 [2024-11-15 11:03:28.076333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.709 Malloc2 00:23:08.709 Malloc3 00:23:08.709 Malloc4 00:23:08.709 Malloc5 00:23:08.970 Malloc6 00:23:08.970 Malloc7 00:23:08.970 Malloc8 00:23:08.970 Malloc9 00:23:08.970 Malloc10 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=461271 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:08.970 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:09.229 [2024-11-15 11:03:28.554994] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 460981 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 460981 ']' 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 460981 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 460981 00:23:14.516 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:14.517 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:14.517 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 460981' 00:23:14.517 killing process with pid 460981 00:23:14.517 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 460981 00:23:14.517 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 460981 00:23:14.517 [2024-11-15 11:03:33.552196] 
00:23:14.517 [2024-11-15 11:03:33.552196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70f20 is same with the state(6) to be set
00:23:14.517 [2024-11-15 11:03:33.552309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d713f0 is same with the state(6) to be set
00:23:14.517 [... 9 further identical entries for tqpair=0x1d713f0 condensed ...]
00:23:14.517 [2024-11-15 11:03:33.552825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d718c0 is same with the state(6) to be set
00:23:14.517 [... 6 further identical entries for tqpair=0x1d718c0 condensed ...]
00:23:14.517 [2024-11-15 11:03:33.553155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70a50 is same with the state(6) to be set
00:23:14.517 [... 4 further identical entries for tqpair=0x1d70a50 condensed ...]
00:23:14.517 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.517 [2024-11-15 11:03:33.554833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.517 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.517 [2024-11-15 11:03:33.555702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.518 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.518 [2024-11-15 11:03:33.558558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.518 NVMe io qpair process completion error
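Everything from here to the end of the capture is dominated by per-I/O failure lines; only the qpair-level messages mark progress as each controller's queue pairs are torn down. When reading such a capture offline, a grouped summary is more useful than the raw stream. A small sketch, assuming the console output has been saved to build.log (hypothetical filename):

    # Group the qpair-level failures by subsystem and qpair id.
    grep -o '\[nqn[^]]*\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' build.log |
        sort | uniq -c | sort -rn

    # Tally the per-I/O failure lines to gauge how much I/O was in flight.
    grep -c 'Write completed with error (sct=0, sc=8)' build.log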
00:23:14.518 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.519 [2024-11-15 11:03:33.559847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.519 [2024-11-15 11:03:33.560063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set
00:23:14.519 [... 3 further identical entries for tqpair=0x1d72c00 condensed ...]
00:23:14.519 [2024-11-15 11:03:33.560348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71d90 is same with the state(6) to be set
00:23:14.519 [... 7 further identical entries for tqpair=0x1d71d90, interleaved with 'Write completed with error (sct=0, sc=8)' entries, condensed ...]
00:23:14.519 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.519 [2024-11-15 11:03:33.560769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.519 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.519 [2024-11-15 11:03:33.561709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.520 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.520 [2024-11-15 11:03:33.563102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.520 NVMe io qpair process completion error
00:23:14.520 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.520 [2024-11-15 11:03:33.564492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.520 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.521 [2024-11-15 11:03:33.565332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.521 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.521 [2024-11-15 11:03:33.566297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.521 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.522 [2024-11-15 11:03:33.568537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.522 NVMe io qpair process completion error
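Two codes repeat throughout this storm. The -6 in "starting I/O failed: -6" and "CQ transport error -6" is -ENXIO, matching the "No such device or address" text in the log, reported once the TCP connection to the killed target is gone. In the completion lines, sct=0 is the NVMe generic command status type, and within that type sc=8 should correspond to "Command Aborted due to SQ Deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION in SPDK's headers), consistent with the host aborting in-flight writes while it tears down the queue pairs. A quick check that a capture contains only that status, again assuming the hypothetical build.log:

    # List every distinct (sct, sc) pair; this shutdown test is expected
    # to show only sct=0, sc=8.
    grep -o 'sct=[0-9]*, sc=[0-9]*' build.log | sort -u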
00:23:14.522 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.522 [2024-11-15 11:03:33.569952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.522 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.523 [2024-11-15 11:03:33.571388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.523 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.523 [2024-11-15 11:03:33.573996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.523 NVMe io qpair process completion error
00:23:14.523 [... long run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries condensed ...]
00:23:14.523 [2024-11-15 11:03:33.575278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.524 [... run of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries continues ...]
completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 [2024-11-15 11:03:33.576210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, 
sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 [2024-11-15 11:03:33.577121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 
00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.524 Write completed with error (sct=0, sc=8) 00:23:14.524 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 
00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 [2024-11-15 11:03:33.578760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.525 NVMe io qpair process completion error 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write 
completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 [2024-11-15 11:03:33.580026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 [2024-11-15 11:03:33.580848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 
Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 Write completed with error (sct=0, sc=8) 00:23:14.525 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 
00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 [2024-11-15 11:03:33.581789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting 
I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 [2024-11-15 11:03:33.584595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.526 NVMe io qpair process completion error 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 
starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.526 starting I/O failed: -6 00:23:14.526 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 [2024-11-15 11:03:33.585883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 
00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 [2024-11-15 11:03:33.586799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed 
with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 [2024-11-15 11:03:33.587724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 
00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.527 Write completed with error (sct=0, sc=8) 00:23:14.527 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 
00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 [2024-11-15 11:03:33.589346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:14.528 NVMe io qpair process completion error 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 Write completed with error (sct=0, sc=8) 00:23:14.528 starting I/O failed: -6 00:23:14.528 Write completed with error (sct=0, sc=8) 
00:23:14.528 Write completed with error (sct=0, sc=8)
00:23:14.528 starting I/O failed: -6
00:23:14.528 [hundreds of further identical 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' completions omitted; the distinct per-qpair errors follow]
00:23:14.528 [2024-11-15 11:03:33.590628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.528 [2024-11-15 11:03:33.591448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.529 [2024-11-15 11:03:33.592376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.529 [2024-11-15 11:03:33.594511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.529 NVMe io qpair process completion error
00:23:14.530 [2024-11-15 11:03:33.596196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.530 [2024-11-15 11:03:33.597124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.531 [2024-11-15 11:03:33.598794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.531 NVMe io qpair process completion error
00:23:14.531 [2024-11-15 11:03:33.600154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:14.532 [2024-11-15 11:03:33.600982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:14.532 [2024-11-15 11:03:33.601939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:14.533 [2024-11-15 11:03:33.604110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:14.533 NVMe io qpair process completion error
00:23:14.533 Initializing NVMe Controllers
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:14.533 Controller IO queue size 128, less than required.
00:23:14.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:14.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:14.533 Initialization complete. Launching workers.
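The "Controller IO queue size 128, less than required" advisories above mean the perf job requested more outstanding I/O per qpair than the target's negotiated IO queue size, so the surplus sat queued inside the NVMe driver. A minimal re-run sketch along the lines the log itself suggests, assuming the usual spdk_nvme_perf flag spellings (-q, -o, -w, -t, -r) and reusing the address and subsystem NQN from this log:

    #!/usr/bin/env bash
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # Keep the per-qpair queue depth (-q) at or below the advertised limit of 128
    # and the IO size (-o, in bytes) small, so requests are not held in the driver.
    "$PERF" -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'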
00:23:14.533 ========================================================
00:23:14.533                                                                  Latency(us)
00:23:14.533 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1886.12      81.04   67882.99     869.50  123285.92
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1885.91      81.04   67913.65     675.89  124686.98
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1889.09      81.17   67831.95     560.44  122659.70
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1890.79      81.24   67803.86     931.49  124261.05
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1903.32      81.78   67381.13     678.72  125005.74
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1896.10      81.47   67677.44     740.35  119009.43
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1885.70      81.03   68073.99     676.70  129331.84
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1898.01      81.56   67655.75     926.74  123712.40
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1884.21      80.96   68176.27     828.39  133515.57
00:23:14.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1886.76      81.07   67392.95     625.52  124243.32
00:23:14.533 ========================================================
00:23:14.533 Total                                                                    :   18906.02     812.37   67778.46     560.44  133515.57
00:23:14.533
00:23:14.533 [2024-11-15 11:03:33.609601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f560 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x660740 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65fef0 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x661720 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x660410 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65fbc0 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x661900 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x660a70 is same with the state(6) to be set
00:23:14.533 [2024-11-15 11:03:33.609879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x661ae0 is same with the state(6) to be set
00:23:14.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:14.533 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 461271
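The `NOT wait 461271` record begins an expected-failure check: the target was shut down underneath the perf process, so the harness asserts that waiting on that pid yields a non-zero status. A rough sketch of the pattern as it reads from the trace (the real helper in autotest_common.sh also routes the argument through valid_exec_arg, which is omitted here):

    NOT() {
        local es=0
        "$@" || es=$?                    # run the wrapped command, keep its exit status
        (( es > 128 )) && return "$es"   # codes above 128 mean death by signal, a real error
        (( es != 0 ))                    # succeed only if the command actually failed
    }
    NOT wait 461271                      # passes in the trace above: es=1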
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 461271
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 461271
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 460981 ']'
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 460981
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 460981 ']'
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 460981
00:23:15.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (460981) - No such process
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 460981 is not found'
00:23:15.474 Process with pid 460981 is not found
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:15.474 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:18.015
00:23:18.015 real    0m10.256s
00:23:18.015 user    0m27.970s
00:23:18.015 sys     0m4.018s
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:18.015 ************************************
00:23:18.015 END TEST nvmf_shutdown_tc4
00:23:18.015 ************************************
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:18.015
00:23:18.015 real    0m43.581s
00:23:18.015 user    1m45.663s
00:23:18.015 sys     0m14.151s
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:18.015 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
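The teardown traced above (stoptarget, then nvmftestfini, nvmfcleanup and iptr) reduces to: flush state, retry unloading the NVMe/TCP kernel modules until their references drop, and restore iptables minus any SPDK-added rules. A simplified sketch of that sequence; the helper names and the grep filter mirror the trace, while the retry loop body and its pacing are assumptions:

    nvmfcleanup() {
        sync
        set +e                          # module removal may fail while references remain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }
    iptr() {
        # Keep every rule except the SPDK_NVMF-tagged ones, as in nvmf/common.sh@791.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }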
00:23:18.015 ************************************
00:23:18.015 END TEST nvmf_shutdown
00:23:18.015 ************************************
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:18.015 ************************************
00:23:18.015 START TEST nvmf_nsid
00:23:18.015 ************************************
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:18.015 * Looking for test storage...
00:23:18.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:23:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:18.015 --rc genhtml_branch_coverage=1
00:23:18.015 --rc genhtml_function_coverage=1
00:23:18.015 --rc genhtml_legend=1
00:23:18.015 --rc geninfo_all_blocks=1
00:23:18.015 --rc geninfo_unexecuted_blocks=1
00:23:18.015
00:23:18.015 '
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:23:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:18.015 --rc genhtml_branch_coverage=1
00:23:18.015 --rc genhtml_function_coverage=1
00:23:18.015 --rc genhtml_legend=1
00:23:18.015 --rc geninfo_all_blocks=1
00:23:18.015 --rc geninfo_unexecuted_blocks=1
00:23:18.015
00:23:18.015 '
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:23:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:18.015 --rc genhtml_branch_coverage=1
00:23:18.015 --rc genhtml_function_coverage=1
00:23:18.015 --rc genhtml_legend=1
00:23:18.015 --rc geninfo_all_blocks=1
00:23:18.015 --rc geninfo_unexecuted_blocks=1
00:23:18.015
00:23:18.015 '
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:23:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:18.015 --rc genhtml_branch_coverage=1
00:23:18.015 --rc genhtml_function_coverage=1
00:23:18.015 --rc genhtml_legend=1
00:23:18.015 --rc geninfo_all_blocks=1
00:23:18.015 --rc geninfo_unexecuted_blocks=1
00:23:18.015
00:23:18.015 '
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
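The scripts/common.sh trace above is the harness's stock version check: `lt 1.15 2` splits both version strings into numeric fields, treats missing fields as zero, and compares field by field, so the installed lcov 1.15 is correctly ranked below 2.x. A condensed sketch of that logic under stated simplifications (the original also splits on '-' and ':' via IFS=.-: and validates each field through decimal()):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.                       # split on dots only in this sketch
        local -a ver1=($1) ver2=($3)
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]                 # all fields equal: true for ==, <=, >=
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace: returns 0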
== FreeBSD ]] 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.015 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.016 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.167 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:26.168 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:26.168 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
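The NIC discovery traced above keys a pci_bus_cache map by "vendor:device" and then filters by known IDs: 0x8086:0x159b is an Intel E810 port (bound to the ice driver in the trace), which is why both 0000:4b:00.x devices land in the e810 array. A minimal sketch of that pattern, assuming lspci is available; this is not the actual gather_supported_nvmf_pci_devs implementation:

    declare -A pci_bus_cache                       # key "0xVENDOR:0xDEVICE" -> space-separated PCI addresses
    while read -r bdf _class vendor device _; do
        # lspci -Dnmm prints quoted numeric fields: 0000:4b:00.0 "0200" "8086" "159b" ...
        key="0x${vendor//\"/}:0x${device//\"/}"
        pci_bus_cache[$key]+="${pci_bus_cache[$key]:+ }$bdf"
    done < <(lspci -Dnmm)
    e810=(${pci_bus_cache["0x8086:0x159b"]})       # -> 0000:4b:00.0 0000:4b:00.1 on this node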
00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:26.168 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:26.168 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.168 11:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:23:26.168 00:23:26.168 --- 10.0.0.2 ping statistics --- 00:23:26.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.168 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:26.168 00:23:26.168 --- 10.0.0.1 ping statistics --- 00:23:26.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.168 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.168 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=466695 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 466695 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 466695 ']' 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.169 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.169 [2024-11-15 11:03:44.921241] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:23:26.169 [2024-11-15 11:03:44.921311] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.169 [2024-11-15 11:03:45.019902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.169 [2024-11-15 11:03:45.071283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.169 [2024-11-15 11:03:45.071333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.169 [2024-11-15 11:03:45.071342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.169 [2024-11-15 11:03:45.071349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.169 [2024-11-15 11:03:45.071355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.169 [2024-11-15 11:03:45.072137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.429 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=466973 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=37474278-194a-4968-9fcb-16e526591ec2 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1afb90cb-8882-4b8b-90c6-0ba47daeec87 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=12e79c07-bbee-447f-8959-d1628d47b008 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.430 null0 00:23:26.430 null1 00:23:26.430 [2024-11-15 11:03:45.834305] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:23:26.430 [2024-11-15 11:03:45.834369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466973 ] 00:23:26.430 null2 00:23:26.430 [2024-11-15 11:03:45.839810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.430 [2024-11-15 11:03:45.864119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 466973 /var/tmp/tgt2.sock 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 466973 ']' 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:26.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
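The three UUIDs generated above (ns1uuid, ns2uuid, ns3uuid) are attached to namespaces, and the checks that follow compare them against the NGUID each connected block device reports. The conversion reduces to stripping dashes and upper-casing, roughly as sketched below; the real uuid2nguid helper lives in nvmf/common.sh and uses tr -d - as traced later:

    uuid2nguid() { local u=${1^^}; echo "${u//-/}"; }      # 37474278-194a-... -> 37474278194A...
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "$(uuid2nguid 37474278-194a-4968-9fcb-16e526591ec2)" ]] \
        && echo "nvme0n1 reports the NGUID derived from ns1uuid"   # 37474278194A49689FCB16E526591EC2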
00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.430 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:26.430 [2024-11-15 11:03:45.927080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.690 [2024-11-15 11:03:45.979433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.950 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.951 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:26.951 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:27.211 [2024-11-15 11:03:46.548872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.211 [2024-11-15 11:03:46.565065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:27.211 nvme0n1 nvme0n2 00:23:27.211 nvme1n1 00:23:27.211 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:27.211 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:27.211 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:28.594 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:29.980 11:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 37474278-194a-4968-9fcb-16e526591ec2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=37474278194a49689fcb16e526591ec2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 37474278194A49689FCB16E526591EC2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 37474278194A49689FCB16E526591EC2 == \3\7\4\7\4\2\7\8\1\9\4\A\4\9\6\8\9\F\C\B\1\6\E\5\2\6\5\9\1\E\C\2 ]] 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1afb90cb-8882-4b8b-90c6-0ba47daeec87 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1afb90cb88824b8b90c60ba47daeec87 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1AFB90CB88824B8B90C60BA47DAEEC87 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1AFB90CB88824B8B90C60BA47DAEEC87 == \1\A\F\B\9\0\C\B\8\8\8\2\4\B\8\B\9\0\C\6\0\B\A\4\7\D\A\E\E\C\8\7 ]] 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:29.980 11:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 12e79c07-bbee-447f-8959-d1628d47b008 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=12e79c07bbee447f8959d1628d47b008 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 12E79C07BBEE447F8959D1628D47B008 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 12E79C07BBEE447F8959D1628D47B008 == \1\2\E\7\9\C\0\7\B\B\E\E\4\4\7\F\8\9\5\9\D\1\6\2\8\D\4\7\B\0\0\8 ]] 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 466973 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 466973 ']' 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 466973 00:23:29.980 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 466973 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 466973' 00:23:30.240 killing process with pid 466973 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 466973 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 466973 00:23:30.240 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.500 rmmod nvme_tcp 00:23:30.500 rmmod nvme_fabrics 00:23:30.500 rmmod nvme_keyring 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 466695 ']' 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 466695 00:23:30.500 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 466695 ']' 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 466695 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 466695 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 466695' 00:23:30.501 killing process with pid 466695 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 466695 00:23:30.501 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 466695 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.501 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.762 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.762 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.762 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.762 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.762 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.675 00:23:32.675 real 0m15.014s 00:23:32.675 user 0m11.349s 00:23:32.675 
sys 0m7.020s 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:32.675 ************************************ 00:23:32.675 END TEST nvmf_nsid 00:23:32.675 ************************************ 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:32.675 00:23:32.675 real 12m59.921s 00:23:32.675 user 27m5.530s 00:23:32.675 sys 3m55.993s 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:32.675 11:03:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.675 ************************************ 00:23:32.675 END TEST nvmf_target_extra 00:23:32.675 ************************************ 00:23:32.675 11:03:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:32.675 11:03:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:32.675 11:03:52 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:32.675 11:03:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.935 ************************************ 00:23:32.935 START TEST nvmf_host 00:23:32.935 ************************************ 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:32.935 * Looking for test storage... 00:23:32.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.935 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:32.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.936 --rc genhtml_branch_coverage=1 00:23:32.936 --rc genhtml_function_coverage=1 00:23:32.936 --rc genhtml_legend=1 00:23:32.936 --rc geninfo_all_blocks=1 00:23:32.936 --rc geninfo_unexecuted_blocks=1 00:23:32.936 00:23:32.936 ' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:32.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.936 --rc genhtml_branch_coverage=1 00:23:32.936 --rc genhtml_function_coverage=1 00:23:32.936 --rc genhtml_legend=1 00:23:32.936 --rc geninfo_all_blocks=1 00:23:32.936 --rc geninfo_unexecuted_blocks=1 00:23:32.936 00:23:32.936 ' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:32.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.936 --rc genhtml_branch_coverage=1 00:23:32.936 --rc genhtml_function_coverage=1 00:23:32.936 --rc genhtml_legend=1 00:23:32.936 --rc geninfo_all_blocks=1 00:23:32.936 --rc geninfo_unexecuted_blocks=1 00:23:32.936 00:23:32.936 ' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:32.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.936 --rc genhtml_branch_coverage=1 00:23:32.936 --rc genhtml_function_coverage=1 00:23:32.936 --rc genhtml_legend=1 00:23:32.936 --rc geninfo_all_blocks=1 00:23:32.936 --rc geninfo_unexecuted_blocks=1 00:23:32.936 00:23:32.936 ' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
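The lt 1.15 2 trace just above, repeated at the top of each test script here, is scripts/common.sh probing whether the installed lcov predates 2.x so the matching coverage flags get exported into LCOV_OPTS. cmp_versions splits each version string on separators and compares component by component; a condensed sketch of the same idea (version_lt is a hypothetical name, and unlike the real helper it assumes purely numeric components):

    version_lt() {                                 # behaves like cmp_versions "$1" '<' "$2"
        local -a v1 v2; local i n
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # strictly smaller component: less-than
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1    # strictly larger component: not less-than
        done
        return 1                                   # all components equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: keep the --rc lcov_*_coverage=1 options"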
00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.936 11:03:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.197 11:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:33.197 11:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:33.197 11:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:33.197 11:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.197 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.198 ************************************ 00:23:33.198 START TEST nvmf_multicontroller 00:23:33.198 ************************************ 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.198 * Looking for test storage... 
00:23:33.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.198 --rc genhtml_branch_coverage=1 00:23:33.198 --rc genhtml_function_coverage=1 00:23:33.198 --rc genhtml_legend=1 00:23:33.198 --rc geninfo_all_blocks=1 00:23:33.198 --rc geninfo_unexecuted_blocks=1 00:23:33.198 00:23:33.198 ' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.198 --rc genhtml_branch_coverage=1 00:23:33.198 --rc genhtml_function_coverage=1 00:23:33.198 --rc genhtml_legend=1 00:23:33.198 --rc geninfo_all_blocks=1 00:23:33.198 --rc geninfo_unexecuted_blocks=1 00:23:33.198 00:23:33.198 ' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.198 --rc genhtml_branch_coverage=1 00:23:33.198 --rc genhtml_function_coverage=1 00:23:33.198 --rc genhtml_legend=1 00:23:33.198 --rc geninfo_all_blocks=1 00:23:33.198 --rc geninfo_unexecuted_blocks=1 00:23:33.198 00:23:33.198 ' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.198 --rc genhtml_branch_coverage=1 00:23:33.198 --rc genhtml_function_coverage=1 00:23:33.198 --rc genhtml_legend=1 00:23:33.198 --rc geninfo_all_blocks=1 00:23:33.198 --rc geninfo_unexecuted_blocks=1 00:23:33.198 00:23:33.198 ' 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:33.198 11:03:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.198 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.460 11:03:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.460 11:03:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.597 
11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.597 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.597 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.598 11:03:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
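With both E810 ports discovered (cvl_0_0 and cvl_0_1 above), the nvmf_tcp_init body traced next splits them between a private network namespace for the target side and the root namespace for the initiator. The sequence it runs reduces to roughly the following, with the interface names, addresses, and port taken from this run (a condensed sketch, not the verbatim common.sh code):

# Target port lives in its own netns; initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator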
00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.598 11:03:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:23:41.598 00:23:41.598 --- 10.0.0.2 ping statistics --- 00:23:41.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.598 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:23:41.598 00:23:41.598 --- 10.0.0.1 ping statistics --- 00:23:41.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.598 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=472070 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 472070 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 472070 ']' 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:41.598 11:04:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.598 [2024-11-15 11:04:00.353191] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:23:41.599 [2024-11-15 11:04:00.353262] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.599 [2024-11-15 11:04:00.452908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.599 [2024-11-15 11:04:00.504602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.599 [2024-11-15 11:04:00.504652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.599 [2024-11-15 11:04:00.504665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.599 [2024-11-15 11:04:00.504672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.599 [2024-11-15 11:04:00.504678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.599 [2024-11-15 11:04:00.506596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.599 [2024-11-15 11:04:00.506803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.599 [2024-11-15 11:04:00.506803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.860 [2024-11-15 11:04:01.215761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.860 Malloc0 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.860 [2024-11-15 11:04:01.291958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.860 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 [2024-11-15 11:04:01.303830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 Malloc1 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=472283 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 472283 /var/tmp/bdevperf.sock 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 472283 ']' 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
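At this point the target (nvmf_tgt, pid 472070, running inside cvl_0_0_ns_spdk) has been fully provisioned over JSON-RPC, and a bdevperf instance is coming up with its own RPC socket. The rpc_cmd calls traced above boil down to the following rpc.py invocations, assuming a checkout at the workspace path shown and the target's default /var/tmp/spdk.sock (rpc_cmd is the autotest wrapper around scripts/rpc.py; the rpc helper below is ours):

rpc() { ./scripts/rpc.py "$@"; }   # run from the spdk checkout; default socket /var/tmp/spdk.sock
rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    rpc bdev_malloc_create 64 512 -b Malloc$((i - 1))                 # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
done
# The I/O generator, started with a deferred workload (-z) and a private RPC socket:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f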
00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:41.861 11:04:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:42.804 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:42.804 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0
00:23:42.804 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:42.804 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.804 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.066 NVMe0n1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.066 1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.066 request:
00:23:43.066 {
00:23:43.066 "name": "NVMe0",
00:23:43.066 "trtype": "tcp",
00:23:43.066 "traddr": "10.0.0.2",
00:23:43.066 "adrfam": "ipv4",
00:23:43.066 "trsvcid": "4420",
00:23:43.066 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:43.066 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:23:43.066 "hostaddr": "10.0.0.1",
00:23:43.066 "prchk_reftag": false,
00:23:43.066 "prchk_guard": false,
00:23:43.066 "hdgst": false,
00:23:43.066 "ddgst": false,
00:23:43.066 "allow_unrecognized_csi": false,
00:23:43.066 "method": "bdev_nvme_attach_controller",
00:23:43.066 "req_id": 1
00:23:43.066 }
00:23:43.066 Got JSON-RPC error response
00:23:43.066 response:
00:23:43.066 {
00:23:43.066 "code": -114,
00:23:43.066 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:43.066 }
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.066 request:
00:23:43.066 {
00:23:43.066 "name": "NVMe0",
00:23:43.066 "trtype": "tcp",
00:23:43.066 "traddr": "10.0.0.2",
00:23:43.066 "adrfam": "ipv4",
00:23:43.066 "trsvcid": "4420",
00:23:43.066 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:43.066 "hostaddr": "10.0.0.1",
00:23:43.066 "prchk_reftag": false,
00:23:43.066 "prchk_guard": false,
00:23:43.066 "hdgst": false,
00:23:43.066 "ddgst": false,
00:23:43.066 "allow_unrecognized_csi": false,
00:23:43.066 "method": "bdev_nvme_attach_controller",
00:23:43.066 "req_id": 1
00:23:43.066 }
00:23:43.066 Got JSON-RPC error response
00:23:43.066 response:
00:23:43.066 {
00:23:43.066 "code": -114,
00:23:43.066 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:43.066 }
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.066 request:
00:23:43.066 {
00:23:43.066 "name": "NVMe0",
00:23:43.066 "trtype": "tcp",
00:23:43.066 "traddr": "10.0.0.2",
00:23:43.066 "adrfam": "ipv4",
00:23:43.066 "trsvcid": "4420",
00:23:43.066 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:43.066 "hostaddr": "10.0.0.1",
00:23:43.066 "prchk_reftag": false,
00:23:43.066 "prchk_guard": false,
00:23:43.066 "hdgst": false,
00:23:43.066 "ddgst": false,
00:23:43.066 "multipath": "disable",
00:23:43.066 "allow_unrecognized_csi": false,
00:23:43.066 "method": "bdev_nvme_attach_controller",
00:23:43.066 "req_id": 1
00:23:43.066 }
00:23:43.066 Got JSON-RPC error response
00:23:43.066 response:
00:23:43.066 {
00:23:43.066 "code": -114,
00:23:43.066 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:23:43.066 }
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.066 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.067 request:
00:23:43.067 {
00:23:43.067 "name": "NVMe0",
00:23:43.067 "trtype": "tcp",
00:23:43.067 "traddr": "10.0.0.2",
00:23:43.067 "adrfam": "ipv4",
00:23:43.067 "trsvcid": "4420",
00:23:43.067 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:43.067 "hostaddr": "10.0.0.1",
00:23:43.067 "prchk_reftag": false,
00:23:43.067 "prchk_guard": false,
00:23:43.067 "hdgst": false,
00:23:43.067 "ddgst": false,
00:23:43.067 "multipath": "failover",
00:23:43.067 "allow_unrecognized_csi": false,
00:23:43.067 "method": "bdev_nvme_attach_controller",
00:23:43.067 "req_id": 1
00:23:43.067 }
00:23:43.067 Got JSON-RPC error response
00:23:43.067 response:
00:23:43.067 {
00:23:43.067 "code": -114,
00:23:43.067 "message": "A controller named NVMe0 already exists with the specified network path"
00:23:43.067 }
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.067 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.327 NVMe0n1
00:23:43.327 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
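Taken together, the four rejected attaches above pin down bdev_nvme_attach_controller's reuse rules for an existing controller name: a different host NQN, a different subsystem NQN, multipath disabled, or '-x failover' over an identical network path all fail with error -114, and only a genuinely new path to the same subsystem is accepted, which the test demonstrates by attaching through the second listener port. Against this run's bdevperf RPC socket that boils down to the following (rpc_cmd again wraps scripts/rpc.py; the B helper name is ours):

B() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1    # first path: creates NVMe0n1
B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1    # same name, other subsystem: -114
B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1                # new port, same subsystem: accepted
B bdev_nvme_get_controllers                    # both paths now appear under NVMe0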
00:23:43.327 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:43.327 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.327 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.328
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:23:43.328 11:04:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:44.712 {
00:23:44.712 "results": [
00:23:44.712 {
00:23:44.712 "job": "NVMe0n1",
00:23:44.712 "core_mask": "0x1",
00:23:44.712 "workload": "write",
00:23:44.712 "status": "finished",
00:23:44.712 "queue_depth": 128,
00:23:44.712 "io_size": 4096,
00:23:44.712 "runtime": 1.006194,
00:23:44.712 "iops": 28419.966726098544,
00:23:44.712 "mibps": 111.01549502382244,
00:23:44.712 "io_failed": 0,
00:23:44.713 "io_timeout": 0,
00:23:44.713 "avg_latency_us": 4492.552797127803,
00:23:44.713 "min_latency_us": 2375.68,
00:23:44.713 "max_latency_us": 15947.093333333334
00:23:44.713 }
00:23:44.713 ],
00:23:44.713 "core_count": 1
00:23:44.713 }
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 472283
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 472283 ']'
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 472283
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 472283
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 472283'
00:23:44.713 killing process with pid 472283
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 472283
00:23:44.713 11:04:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 472283
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat
00:23:44.713 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:44.713 [2024-11-15 11:04:01.434748] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:23:44.713 [2024-11-15 11:04:01.434820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472283 ]
00:23:44.713 [2024-11-15 11:04:01.529045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:44.713 [2024-11-15 11:04:01.582811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:44.713 [2024-11-15 11:04:02.767844] bdev.c:4897:bdev_name_add: *ERROR*: Bdev name ab0c7b35-e910-4d5c-9325-fac076b0ba6c already exists
00:23:44.713 [2024-11-15 11:04:02.767872] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:ab0c7b35-e910-4d5c-9325-fac076b0ba6c alias for bdev NVMe1n1
00:23:44.713 [2024-11-15 11:04:02.767881] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:44.713 Running I/O for 1 seconds...
00:23:44.713 28406.00 IOPS, 110.96 MiB/s
00:23:44.713
00:23:44.713 Latency(us)
00:23:44.713 [2024-11-15T10:04:04.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.713 [2024-11-15T10:04:04.240Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:44.713 NVMe0n1 : 1.01 28419.97 111.02 0.00 0.00 4492.55 2375.68 15947.09
00:23:44.713 [2024-11-15T10:04:04.240Z] ===================================================================================================================
00:23:44.713 [2024-11-15T10:04:04.240Z] Total : 28419.97 111.02 0.00 0.00 4492.55 2375.68 15947.09
00:23:44.713 Received shutdown signal, test time was about 1.000000 seconds
00:23:44.713
00:23:44.713 Latency(us)
00:23:44.713 [2024-11-15T10:04:04.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.713 [2024-11-15T10:04:04.240Z] ===================================================================================================================
00:23:44.713 [2024-11-15T10:04:04.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:44.713 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 472070 ']' 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 472070 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 472070 ']' 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 472070 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:44.713 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 472070 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 472070' 00:23:44.974 killing process with pid 472070 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 472070 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 472070 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.974 11:04:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.520 00:23:47.520 real 0m13.984s 00:23:47.520 user 0m16.900s 00:23:47.520 sys 0m6.587s 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 ************************************ 00:23:47.520 END TEST nvmf_multicontroller 00:23:47.520 ************************************ 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 ************************************ 00:23:47.520 START TEST nvmf_aer 00:23:47.520 ************************************ 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.520 * Looking for test storage... 00:23:47.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:47.520 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:47.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.521 --rc genhtml_branch_coverage=1 00:23:47.521 --rc genhtml_function_coverage=1 00:23:47.521 --rc genhtml_legend=1 00:23:47.521 --rc geninfo_all_blocks=1 00:23:47.521 --rc geninfo_unexecuted_blocks=1 00:23:47.521 00:23:47.521 ' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:47.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.521 --rc genhtml_branch_coverage=1 00:23:47.521 --rc genhtml_function_coverage=1 00:23:47.521 --rc genhtml_legend=1 00:23:47.521 --rc geninfo_all_blocks=1 00:23:47.521 --rc geninfo_unexecuted_blocks=1 00:23:47.521 00:23:47.521 ' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:47.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.521 --rc genhtml_branch_coverage=1 00:23:47.521 --rc genhtml_function_coverage=1 00:23:47.521 --rc genhtml_legend=1 00:23:47.521 --rc geninfo_all_blocks=1 00:23:47.521 --rc geninfo_unexecuted_blocks=1 00:23:47.521 00:23:47.521 ' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:47.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.521 --rc genhtml_branch_coverage=1 00:23:47.521 --rc genhtml_function_coverage=1 00:23:47.521 --rc genhtml_legend=1 00:23:47.521 --rc geninfo_all_blocks=1 00:23:47.521 --rc geninfo_unexecuted_blocks=1 00:23:47.521 00:23:47.521 ' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.521 11:04:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:55.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:55.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.660 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:55.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.661 11:04:14 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:55.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.661 
11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:55.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:55.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms
00:23:55.661
00:23:55.661 --- 10.0.0.2 ping statistics ---
00:23:55.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:55.661 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:55.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:55.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms
00:23:55.661
00:23:55.661 --- 10.0.0.1 ping statistics ---
00:23:55.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:55.661 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=477100
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 477100
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 477100 ']'
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:55.661 11:04:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.661 [2024-11-15 11:04:14.445246] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
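Before the target's startup log continues below, the network plumbing that nvmftestinit traced above, and that the two pings just verified, can be summarized. A minimal hand-run sketch, assuming the same interface names (cvl_0_0, cvl_0_1) and addresses as this run:

# Move one port of the e810 NIC into a private namespace for the target,
# leaving the other port in the root namespace for the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, then verify
# reachability in both directions, as the log does above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1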
00:23:55.661 [2024-11-15 11:04:14.445310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:55.661 [2024-11-15 11:04:14.543556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:55.661 [2024-11-15 11:04:14.597572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:55.661 [2024-11-15 11:04:14.597624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:55.661 [2024-11-15 11:04:14.597632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:55.661 [2024-11-15 11:04:14.597644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:55.661 [2024-11-15 11:04:14.597650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:55.661 [2024-11-15 11:04:14.599693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:55.661 [2024-11-15 11:04:14.599974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:55.661 [2024-11-15 11:04:14.600136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:55.661 [2024-11-15 11:04:14.600138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 [2024-11-15 11:04:15.321484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 Malloc0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 [2024-11-15 11:04:15.392014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:55.923 [
00:23:55.923 {
00:23:55.923 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:55.923 "subtype": "Discovery",
00:23:55.923 "listen_addresses": [],
00:23:55.923 "allow_any_host": true,
00:23:55.923 "hosts": []
00:23:55.923 },
00:23:55.923 {
00:23:55.923 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:55.923 "subtype": "NVMe",
00:23:55.923 "listen_addresses": [
00:23:55.923 {
00:23:55.923 "trtype": "TCP",
00:23:55.923 "adrfam": "IPv4",
00:23:55.923 "traddr": "10.0.0.2",
00:23:55.923 "trsvcid": "4420"
00:23:55.923 }
00:23:55.923 ],
00:23:55.923 "allow_any_host": true,
00:23:55.923 "hosts": [],
00:23:55.923 "serial_number": "SPDK00000000000001",
00:23:55.923 "model_number": "SPDK bdev Controller",
00:23:55.923 "max_namespaces": 2,
00:23:55.923 "min_cntlid": 1,
00:23:55.923 "max_cntlid": 65519,
00:23:55.923 "namespaces": [
00:23:55.923 {
00:23:55.923 "nsid": 1,
00:23:55.923 "bdev_name": "Malloc0",
00:23:55.923 "name": "Malloc0",
00:23:55.923 "nguid": "A0BFDBD40E48403DB340515FD00A5504",
00:23:55.923 "uuid": "a0bfdbd4-0e48-403d-b340-515fd00a5504"
00:23:55.923 }
00:23:55.923 ]
00:23:55.923 }
00:23:55.923 ]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=477186
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']'
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:23:55.923 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']'
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.184 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.184 Malloc1
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.185 [
00:23:56.185 {
00:23:56.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:56.185 "subtype": "Discovery",
00:23:56.185 "listen_addresses": [],
00:23:56.185 "allow_any_host": true,
00:23:56.185 "hosts": []
00:23:56.185 },
00:23:56.185 {
00:23:56.185 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:56.185 "subtype": "NVMe",
00:23:56.185 "listen_addresses": [
00:23:56.185 {
00:23:56.185 "trtype": "TCP",
00:23:56.185 "adrfam": "IPv4",
00:23:56.185 "traddr": "10.0.0.2",
00:23:56.185 "trsvcid": "4420"
00:23:56.185 }
00:23:56.185 ],
00:23:56.185 "allow_any_host": true,
00:23:56.185 "hosts": [],
00:23:56.185 "serial_number": "SPDK00000000000001",
00:23:56.185 "model_number": "SPDK bdev Controller",
00:23:56.185 "max_namespaces": 2,
00:23:56.185 "min_cntlid": 1,
00:23:56.185 "max_cntlid": 65519,
00:23:56.185 "namespaces": [
00:23:56.185 {
00:23:56.185 "nsid": 1,
00:23:56.185 "bdev_name": "Malloc0",
00:23:56.185 "name": "Malloc0",
00:23:56.185 "nguid": "A0BFDBD40E48403DB340515FD00A5504",
00:23:56.185 "uuid": "a0bfdbd4-0e48-403d-b340-515fd00a5504"
00:23:56.185 },
00:23:56.185 {
00:23:56.185 "nsid": 2,
00:23:56.185 "bdev_name": "Malloc1",
00:23:56.185 "name": "Malloc1",
00:23:56.185 "nguid": "5A6A272B451344C7832E2C334ED2B56C",
00:23:56.185 "uuid": "5a6a272b-4513-44c7-832e-2c334ed2b56c"
00:23:56.185 }
00:23:56.185 ]
00:23:56.185 }
00:23:56.185 ]
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.185 Asynchronous Event Request test
00:23:56.185 Attaching to 10.0.0.2
00:23:56.185 Attached to 10.0.0.2
00:23:56.185 Registering asynchronous event callbacks...
00:23:56.185 Starting namespace attribute notice tests for all controllers...
00:23:56.185 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:23:56.185 aer_cb - Changed Namespace
00:23:56.185 Cleaning up...
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 477186
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.185 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:56.446 rmmod nvme_tcp
00:23:56.446 rmmod nvme_fabrics
00:23:56.446 rmmod nvme_keyring
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 477100 ']'
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 477100
00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer --
common/autotest_common.sh@952 -- # '[' -z 477100 ']' 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 477100 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 477100 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 477100' 00:23:56.446 killing process with pid 477100 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 477100 00:23:56.446 11:04:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 477100 00:23:56.706 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.706 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.706 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.706 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:56.706 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.707 11:04:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.249 00:23:59.249 real 0m11.576s 00:23:59.249 user 0m8.123s 00:23:59.249 sys 0m6.181s 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.249 ************************************ 00:23:59.249 END TEST nvmf_aer 00:23:59.249 ************************************ 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:59.249 11:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.250 ************************************ 00:23:59.250 START TEST nvmf_async_init 00:23:59.250 ************************************ 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:59.250 * Looking for test storage... 00:23:59.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:59.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.250 --rc genhtml_branch_coverage=1 00:23:59.250 --rc genhtml_function_coverage=1 00:23:59.250 --rc genhtml_legend=1 00:23:59.250 --rc geninfo_all_blocks=1 00:23:59.250 --rc geninfo_unexecuted_blocks=1 00:23:59.250 00:23:59.250 ' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:59.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.250 --rc genhtml_branch_coverage=1 00:23:59.250 --rc genhtml_function_coverage=1 00:23:59.250 --rc genhtml_legend=1 00:23:59.250 --rc geninfo_all_blocks=1 00:23:59.250 --rc geninfo_unexecuted_blocks=1 00:23:59.250 00:23:59.250 ' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:59.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.250 --rc genhtml_branch_coverage=1 00:23:59.250 --rc genhtml_function_coverage=1 00:23:59.250 --rc genhtml_legend=1 00:23:59.250 --rc geninfo_all_blocks=1 00:23:59.250 --rc geninfo_unexecuted_blocks=1 00:23:59.250 00:23:59.250 ' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:59.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.250 --rc genhtml_branch_coverage=1 00:23:59.250 --rc genhtml_function_coverage=1 00:23:59.250 --rc genhtml_legend=1 00:23:59.250 --rc geninfo_all_blocks=1 00:23:59.250 --rc geninfo_unexecuted_blocks=1 00:23:59.250 00:23:59.250 ' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.250 11:04:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.250 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:59.251 11:04:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6d037b0d512f4718b7f8a811ce2ec0fc 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.251 11:04:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.385 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.386 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.386 11:04:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:07.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:24:07.386 00:24:07.386 --- 10.0.0.2 ping statistics --- 00:24:07.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.386 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:24:07.386 00:24:07.386 --- 10.0.0.1 ping statistics --- 00:24:07.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.386 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:07.386 11:04:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=481471 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 481471 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 481471 ']' 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.386 [2024-11-15 11:04:26.096659] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
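Up to this point the trace is nvmf_tcp_init from nvmf/common.sh: one port of the E810 pair (cvl_0_0) is moved into a network namespace and becomes the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; nvmf_tgt is then launched inside that namespace. A condensed, standalone sketch of those steps, assuming the same interface names, addresses, and workspace layout the log reports:

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above; the interface names
# (cvl_0_0/cvl_0_1) and 10.0.0.x addresses are what this log shows,
# not universal defaults.
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                        # target-side namespace
ip link set cvl_0_0 netns "$NS"           # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the harness tags the rule so cleanup can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace (flags as in the trace above).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
```

Splitting the two ports of one physical NIC across namespaces is what lets a single machine with NET_TYPE=phy exercise real E810 hardware on both ends of the TCP connection.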
00:24:07.386 [2024-11-15 11:04:26.096725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.386 [2024-11-15 11:04:26.195750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.386 [2024-11-15 11:04:26.247792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.386 [2024-11-15 11:04:26.247841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.386 [2024-11-15 11:04:26.247850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.386 [2024-11-15 11:04:26.247857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.386 [2024-11-15 11:04:26.247864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.386 [2024-11-15 11:04:26.248644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.386 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 [2024-11-15 11:04:26.956047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 null0 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6d037b0d512f4718b7f8a811ce2ec0fc 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.647 [2024-11-15 11:04:27.016437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.647 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.908 nvme0n1 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.908 [ 00:24:07.908 { 00:24:07.908 "name": "nvme0n1", 00:24:07.908 "aliases": [ 00:24:07.908 "6d037b0d-512f-4718-b7f8-a811ce2ec0fc" 00:24:07.908 ], 00:24:07.908 "product_name": "NVMe disk", 00:24:07.908 "block_size": 512, 00:24:07.908 "num_blocks": 2097152, 00:24:07.908 "uuid": "6d037b0d-512f-4718-b7f8-a811ce2ec0fc", 00:24:07.908 "numa_id": 0, 00:24:07.908 "assigned_rate_limits": { 00:24:07.908 "rw_ios_per_sec": 0, 00:24:07.908 "rw_mbytes_per_sec": 0, 00:24:07.908 "r_mbytes_per_sec": 0, 00:24:07.908 "w_mbytes_per_sec": 0 00:24:07.908 }, 00:24:07.908 "claimed": false, 00:24:07.908 "zoned": false, 00:24:07.908 "supported_io_types": { 00:24:07.908 "read": true, 00:24:07.908 "write": true, 00:24:07.908 "unmap": false, 00:24:07.908 "flush": true, 00:24:07.908 "reset": true, 00:24:07.908 "nvme_admin": true, 00:24:07.908 "nvme_io": true, 00:24:07.908 "nvme_io_md": false, 00:24:07.908 "write_zeroes": true, 00:24:07.908 "zcopy": false, 00:24:07.908 "get_zone_info": false, 00:24:07.908 "zone_management": false, 00:24:07.908 "zone_append": false, 00:24:07.908 "compare": true, 00:24:07.908 "compare_and_write": true, 00:24:07.908 "abort": true, 00:24:07.908 "seek_hole": false, 00:24:07.908 "seek_data": false, 00:24:07.908 "copy": true, 00:24:07.908 "nvme_iov_md": false 00:24:07.908 }, 00:24:07.908 
"memory_domains": [ 00:24:07.908 { 00:24:07.908 "dma_device_id": "system", 00:24:07.908 "dma_device_type": 1 00:24:07.908 } 00:24:07.908 ], 00:24:07.908 "driver_specific": { 00:24:07.908 "nvme": [ 00:24:07.908 { 00:24:07.908 "trid": { 00:24:07.908 "trtype": "TCP", 00:24:07.908 "adrfam": "IPv4", 00:24:07.908 "traddr": "10.0.0.2", 00:24:07.908 "trsvcid": "4420", 00:24:07.908 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:07.908 }, 00:24:07.908 "ctrlr_data": { 00:24:07.908 "cntlid": 1, 00:24:07.908 "vendor_id": "0x8086", 00:24:07.908 "model_number": "SPDK bdev Controller", 00:24:07.908 "serial_number": "00000000000000000000", 00:24:07.908 "firmware_revision": "25.01", 00:24:07.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.908 "oacs": { 00:24:07.908 "security": 0, 00:24:07.908 "format": 0, 00:24:07.908 "firmware": 0, 00:24:07.908 "ns_manage": 0 00:24:07.908 }, 00:24:07.908 "multi_ctrlr": true, 00:24:07.908 "ana_reporting": false 00:24:07.908 }, 00:24:07.908 "vs": { 00:24:07.908 "nvme_version": "1.3" 00:24:07.908 }, 00:24:07.908 "ns_data": { 00:24:07.908 "id": 1, 00:24:07.908 "can_share": true 00:24:07.908 } 00:24:07.908 } 00:24:07.908 ], 00:24:07.908 "mp_policy": "active_passive" 00:24:07.908 } 00:24:07.908 } 00:24:07.908 ] 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.908 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.908 [2024-11-15 11:04:27.292903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:07.908 [2024-11-15 11:04:27.292986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78ce0 (9): Bad file descriptor 00:24:07.909 [2024-11-15 11:04:27.424665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:07.909 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.909 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.909 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.909 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.169 [ 00:24:08.169 { 00:24:08.169 "name": "nvme0n1", 00:24:08.169 "aliases": [ 00:24:08.169 "6d037b0d-512f-4718-b7f8-a811ce2ec0fc" 00:24:08.169 ], 00:24:08.169 "product_name": "NVMe disk", 00:24:08.169 "block_size": 512, 00:24:08.169 "num_blocks": 2097152, 00:24:08.169 "uuid": "6d037b0d-512f-4718-b7f8-a811ce2ec0fc", 00:24:08.169 "numa_id": 0, 00:24:08.169 "assigned_rate_limits": { 00:24:08.169 "rw_ios_per_sec": 0, 00:24:08.169 "rw_mbytes_per_sec": 0, 00:24:08.169 "r_mbytes_per_sec": 0, 00:24:08.169 "w_mbytes_per_sec": 0 00:24:08.169 }, 00:24:08.169 "claimed": false, 00:24:08.169 "zoned": false, 00:24:08.169 "supported_io_types": { 00:24:08.169 "read": true, 00:24:08.169 "write": true, 00:24:08.169 "unmap": false, 00:24:08.169 "flush": true, 00:24:08.169 "reset": true, 00:24:08.169 "nvme_admin": true, 00:24:08.169 "nvme_io": true, 00:24:08.169 "nvme_io_md": false, 00:24:08.169 "write_zeroes": true, 00:24:08.169 "zcopy": false, 00:24:08.169 "get_zone_info": false, 00:24:08.169 "zone_management": false, 00:24:08.169 "zone_append": false, 00:24:08.169 "compare": true, 00:24:08.169 "compare_and_write": true, 00:24:08.169 "abort": true, 00:24:08.169 "seek_hole": false, 00:24:08.169 "seek_data": false, 00:24:08.169 "copy": true, 00:24:08.169 "nvme_iov_md": false 00:24:08.169 }, 00:24:08.169 "memory_domains": [ 00:24:08.169 { 00:24:08.169 "dma_device_id": "system", 00:24:08.169 "dma_device_type": 1 00:24:08.169 } 00:24:08.169 ], 00:24:08.169 "driver_specific": { 00:24:08.169 "nvme": [ 00:24:08.169 { 00:24:08.169 "trid": { 00:24:08.169 "trtype": "TCP", 00:24:08.169 "adrfam": "IPv4", 00:24:08.169 "traddr": "10.0.0.2", 00:24:08.169 "trsvcid": "4420", 00:24:08.169 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.169 }, 00:24:08.169 "ctrlr_data": { 00:24:08.169 "cntlid": 2, 00:24:08.169 "vendor_id": "0x8086", 00:24:08.169 "model_number": "SPDK bdev Controller", 00:24:08.169 "serial_number": "00000000000000000000", 00:24:08.169 "firmware_revision": "25.01", 00:24:08.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.169 "oacs": { 00:24:08.169 "security": 0, 00:24:08.169 "format": 0, 00:24:08.169 "firmware": 0, 00:24:08.169 "ns_manage": 0 00:24:08.169 }, 00:24:08.169 "multi_ctrlr": true, 00:24:08.169 "ana_reporting": false 00:24:08.169 }, 00:24:08.169 "vs": { 00:24:08.169 "nvme_version": "1.3" 00:24:08.169 }, 00:24:08.169 "ns_data": { 00:24:08.169 "id": 1, 00:24:08.169 "can_share": true 00:24:08.169 } 00:24:08.169 } 00:24:08.169 ], 00:24:08.169 "mp_policy": "active_passive" 00:24:08.169 } 00:24:08.169 } 00:24:08.169 ] 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
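After bdev_nvme_reset_controller nvme0, the second bdev_get_bdevs dump above differs from the first only in "cntlid" (1 became 2): the host dropped the connection (the "Bad file descriptor" flush error is the expected teardown) and reconnected as a new controller on the same subsystem. An illustrative way to pull just that field out of the dump, assuming jq is available; the test itself only prints the full JSON:

```bash
# Hypothetical helper, not part of async_init.sh: extract the controller ID
# from the bdev dump shown above.
scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
  | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
# prints 1 before the reset, 2 after
```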
00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.MD77XXMWHc 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.MD77XXMWHc 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.MD77XXMWHc 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.169 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.169 [2024-11-15 11:04:27.513568] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.169 [2024-11-15 11:04:27.513736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.170 [2024-11-15 11:04:27.537640] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.170 nvme0n1 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.170 [ 00:24:08.170 { 00:24:08.170 "name": "nvme0n1", 00:24:08.170 "aliases": [ 00:24:08.170 "6d037b0d-512f-4718-b7f8-a811ce2ec0fc" 00:24:08.170 ], 00:24:08.170 "product_name": "NVMe disk", 00:24:08.170 "block_size": 512, 00:24:08.170 "num_blocks": 2097152, 00:24:08.170 "uuid": "6d037b0d-512f-4718-b7f8-a811ce2ec0fc", 00:24:08.170 "numa_id": 0, 00:24:08.170 "assigned_rate_limits": { 00:24:08.170 "rw_ios_per_sec": 0, 00:24:08.170 "rw_mbytes_per_sec": 0, 00:24:08.170 "r_mbytes_per_sec": 0, 00:24:08.170 "w_mbytes_per_sec": 0 00:24:08.170 }, 00:24:08.170 "claimed": false, 00:24:08.170 "zoned": false, 00:24:08.170 "supported_io_types": { 00:24:08.170 "read": true, 00:24:08.170 "write": true, 00:24:08.170 "unmap": false, 00:24:08.170 "flush": true, 00:24:08.170 "reset": true, 00:24:08.170 "nvme_admin": true, 00:24:08.170 "nvme_io": true, 00:24:08.170 "nvme_io_md": false, 00:24:08.170 "write_zeroes": true, 00:24:08.170 "zcopy": false, 00:24:08.170 "get_zone_info": false, 00:24:08.170 "zone_management": false, 00:24:08.170 "zone_append": false, 00:24:08.170 "compare": true, 00:24:08.170 "compare_and_write": true, 00:24:08.170 "abort": true, 00:24:08.170 "seek_hole": false, 00:24:08.170 "seek_data": false, 00:24:08.170 "copy": true, 00:24:08.170 "nvme_iov_md": false 00:24:08.170 }, 00:24:08.170 "memory_domains": [ 00:24:08.170 { 00:24:08.170 "dma_device_id": "system", 00:24:08.170 "dma_device_type": 1 00:24:08.170 } 00:24:08.170 ], 00:24:08.170 "driver_specific": { 00:24:08.170 "nvme": [ 00:24:08.170 { 00:24:08.170 "trid": { 00:24:08.170 "trtype": "TCP", 00:24:08.170 "adrfam": "IPv4", 00:24:08.170 "traddr": "10.0.0.2", 00:24:08.170 "trsvcid": "4421", 00:24:08.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.170 }, 00:24:08.170 "ctrlr_data": { 00:24:08.170 "cntlid": 3, 00:24:08.170 "vendor_id": "0x8086", 00:24:08.170 "model_number": "SPDK bdev Controller", 00:24:08.170 "serial_number": "00000000000000000000", 00:24:08.170 "firmware_revision": "25.01", 00:24:08.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.170 "oacs": { 00:24:08.170 "security": 0, 00:24:08.170 "format": 0, 00:24:08.170 "firmware": 0, 00:24:08.170 "ns_manage": 0 00:24:08.170 }, 00:24:08.170 "multi_ctrlr": true, 00:24:08.170 "ana_reporting": false 00:24:08.170 }, 00:24:08.170 "vs": { 00:24:08.170 "nvme_version": "1.3" 00:24:08.170 }, 00:24:08.170 "ns_data": { 00:24:08.170 "id": 1, 00:24:08.170 "can_share": true 00:24:08.170 } 00:24:08.170 } 00:24:08.170 ], 00:24:08.170 "mp_policy": "active_passive" 00:24:08.170 } 00:24:08.170 } 00:24:08.170 ] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.MD77XXMWHc 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
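The last leg of the test (host/async_init.sh@53 through @66 above) repeats the attach over a TLS-secured listener on port 4421: a pre-shared key in the NVMe TLS interchange format is written to a 0600 temp file, registered with the keyring, and referenced by --psk both when authorizing the host and when attaching the controller. The "TLS support is considered experimental" notices in the log come from these calls. A condensed sketch, reusing the published test key from the trace (it is a lab-only example, never a real secret):

```bash
# TLS variant, condensed from the trace above; the PSK value and NQNs are
# the ones async_init.sh uses in this log.
RPC="scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host1

KEY=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"

$RPC keyring_file_add_key key0 "$KEY"                 # register the file-backed PSK
$RPC nvmf_subsystem_allow_any_host "$NQN" --disable   # now require explicit hosts
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4421 -n "$NQN" -q "$HOSTNQN" --psk key0
# (the test removes "$KEY" only after detaching, at host/async_init.sh@76)
```

The third bdev_get_bdevs dump confirms the result: same bdev, but "trsvcid": "4421" and "cntlid": 3 for the TLS-authenticated connection.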
00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.170 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.170 rmmod nvme_tcp 00:24:08.170 rmmod nvme_fabrics 00:24:08.431 rmmod nvme_keyring 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 481471 ']' 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 481471 ']' 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 481471' 00:24:08.431 killing process with pid 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 481471 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.431 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.432 
11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.432 11:04:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.982 00:24:10.982 real 0m11.799s 00:24:10.982 user 0m4.204s 00:24:10.982 sys 0m6.181s 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.982 ************************************ 00:24:10.982 END TEST nvmf_async_init 00:24:10.982 ************************************ 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.982 ************************************ 00:24:10.982 START TEST dma 00:24:10.982 ************************************ 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:10.982 * Looking for test storage... 00:24:10.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:10.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.982 --rc genhtml_branch_coverage=1 00:24:10.982 --rc genhtml_function_coverage=1 00:24:10.982 --rc genhtml_legend=1 00:24:10.982 --rc geninfo_all_blocks=1 00:24:10.982 --rc geninfo_unexecuted_blocks=1 00:24:10.982 00:24:10.982 ' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:10.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.982 --rc genhtml_branch_coverage=1 00:24:10.982 --rc genhtml_function_coverage=1 00:24:10.982 --rc genhtml_legend=1 00:24:10.982 --rc geninfo_all_blocks=1 00:24:10.982 --rc geninfo_unexecuted_blocks=1 00:24:10.982 00:24:10.982 ' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:10.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.982 --rc genhtml_branch_coverage=1 00:24:10.982 --rc genhtml_function_coverage=1 00:24:10.982 --rc genhtml_legend=1 00:24:10.982 --rc geninfo_all_blocks=1 00:24:10.982 --rc geninfo_unexecuted_blocks=1 00:24:10.982 00:24:10.982 ' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:10.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.982 --rc genhtml_branch_coverage=1 00:24:10.982 --rc genhtml_function_coverage=1 00:24:10.982 --rc genhtml_legend=1 00:24:10.982 --rc geninfo_all_blocks=1 00:24:10.982 --rc geninfo_unexecuted_blocks=1 00:24:10.982 00:24:10.982 ' 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.982 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.983 
11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:10.983 00:24:10.983 real 0m0.241s 00:24:10.983 user 0m0.142s 00:24:10.983 sys 0m0.114s 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:10.983 ************************************ 00:24:10.983 END TEST dma 00:24:10.983 ************************************ 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.983 ************************************ 00:24:10.983 START TEST nvmf_identify 00:24:10.983 
************************************ 00:24:10.983 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:11.246 * Looking for test storage... 00:24:11.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.246 --rc genhtml_branch_coverage=1 00:24:11.246 --rc genhtml_function_coverage=1 00:24:11.246 --rc genhtml_legend=1 00:24:11.246 --rc geninfo_all_blocks=1 00:24:11.246 --rc geninfo_unexecuted_blocks=1 00:24:11.246 00:24:11.246 ' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.246 --rc genhtml_branch_coverage=1 00:24:11.246 --rc genhtml_function_coverage=1 00:24:11.246 --rc genhtml_legend=1 00:24:11.246 --rc geninfo_all_blocks=1 00:24:11.246 --rc geninfo_unexecuted_blocks=1 00:24:11.246 00:24:11.246 ' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.246 --rc genhtml_branch_coverage=1 00:24:11.246 --rc genhtml_function_coverage=1 00:24:11.246 --rc genhtml_legend=1 00:24:11.246 --rc geninfo_all_blocks=1 00:24:11.246 --rc geninfo_unexecuted_blocks=1 00:24:11.246 00:24:11.246 ' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:11.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.246 --rc genhtml_branch_coverage=1 00:24:11.246 --rc genhtml_function_coverage=1 00:24:11.246 --rc genhtml_legend=1 00:24:11.246 --rc geninfo_all_blocks=1 00:24:11.246 --rc geninfo_unexecuted_blocks=1 00:24:11.246 00:24:11.246 ' 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
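The cmp_versions/decimal traces above implement a field-wise version comparison: both versions are split on '.' into arrays, fields are compared left to right, and the result decides whether the installed lcov (1.15 here) predates 2.x and therefore needs the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 option names exported in LCOV_OPTS. A minimal sketch of the same comparison (a simplified reconstruction, not the exact scripts/common.sh code):

# lt A B -> exit 0 when version A sorts before version B
lt() {
  local -a ver1 ver2
  IFS=. read -ra ver1 <<< "$1"
  IFS=. read -ra ver2 <<< "$2"
  local v len
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # missing fields compare as 0, so 2 vs 2.0 is a tie
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
  done
  return 1
}
lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"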
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.246 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
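The "[: : integer expression expected" diagnostics captured above (test/nvmf/common.sh line 33, once per test that sources the file) come from an arithmetic test of the form '[' '' -eq 1 ']': one operand is an unset variable that expands to the empty string, and [ requires both operands of -eq to be integers, so it prints the diagnostic and returns 2 instead of quietly evaluating to false. A sketch of the pitfall and the usual guard; FLAG is an illustrative name, not the actual variable tested at line 33:

FLAG=
# Unset/empty FLAG turns this into [ '' -eq 1 ] and reproduces the
# "integer expression expected" message on stderr:
[ "$FLAG" -eq 1 ] && echo hit
# Guard by defaulting the expansion so the comparison always sees an integer:
[ "${FLAG:-0}" -eq 1 ] && echo hit    # empty/unset -> compares 0 -eq 1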
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.247 11:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.392 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
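The "Found net devices under ..." lines that follow are produced by globbing sysfs: each /sys/bus/pci/devices/<bdf>/net/ directory contains one entry per network interface that PCI function exposes, so stripping the path prefix yields the kernel netdev name. The same walk, reduced to a standalone sketch over the two e810 ports from this run:

for pci in 0000:4b:00.0 0000:4b:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done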
00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.392 11:04:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.392 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:24:19.393 00:24:19.393 --- 10.0.0.2 ping statistics --- 00:24:19.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.393 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:24:19.393 00:24:19.393 --- 10.0.0.1 ping statistics --- 00:24:19.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.393 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=486193 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 486193 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 486193 ']' 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:19.393 11:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.393 [2024-11-15 11:04:38.295933] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
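The sequence just traced splits the two e810 ports into a loopback test rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), an iptables rule admits TCP port 4420 from the initiator interface, and the two pings confirm reachability in both directions. The same topology as a standalone sketch, substituting a veth pair for the cabled NIC pair (an assumption; this run uses the real cvl_0_0/cvl_0_1 ports):

ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0     # stand-in for the e810 pair
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side, as in the log
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator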
00:24:19.393 [2024-11-15 11:04:38.296006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.393 [2024-11-15 11:04:38.380060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.393 [2024-11-15 11:04:38.434461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.393 [2024-11-15 11:04:38.434511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.393 [2024-11-15 11:04:38.434520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.393 [2024-11-15 11:04:38.434528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.393 [2024-11-15 11:04:38.434534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.393 [2024-11-15 11:04:38.436608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.393 [2024-11-15 11:04:38.436704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.393 [2024-11-15 11:04:38.436866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.393 [2024-11-15 11:04:38.436867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.654 [2024-11-15 11:04:39.122390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.654 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.916 Malloc0 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.916 [2024-11-15 11:04:39.242497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.916 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:19.917 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.917 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.917 [ 00:24:19.917 { 00:24:19.917 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:19.917 "subtype": "Discovery", 00:24:19.917 "listen_addresses": [ 00:24:19.917 { 00:24:19.917 "trtype": "TCP", 00:24:19.917 "adrfam": "IPv4", 00:24:19.917 "traddr": "10.0.0.2", 00:24:19.917 "trsvcid": "4420" 00:24:19.917 } 00:24:19.917 ], 00:24:19.917 "allow_any_host": true, 00:24:19.917 "hosts": [] 00:24:19.917 }, 00:24:19.917 { 00:24:19.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.917 "subtype": "NVMe", 00:24:19.917 "listen_addresses": [ 00:24:19.917 { 00:24:19.917 "trtype": "TCP", 00:24:19.917 "adrfam": "IPv4", 00:24:19.917 "traddr": "10.0.0.2", 00:24:19.917 "trsvcid": "4420" 00:24:19.917 } 00:24:19.917 ], 00:24:19.917 "allow_any_host": true, 00:24:19.917 "hosts": [], 00:24:19.917 "serial_number": "SPDK00000000000001", 00:24:19.917 "model_number": "SPDK bdev Controller", 00:24:19.917 "max_namespaces": 32, 00:24:19.917 "min_cntlid": 1, 00:24:19.917 "max_cntlid": 65519, 00:24:19.917 "namespaces": [ 00:24:19.917 { 00:24:19.917 "nsid": 1, 00:24:19.917 "bdev_name": "Malloc0", 00:24:19.917 "name": "Malloc0", 00:24:19.917 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:19.917 "eui64": "ABCDEF0123456789", 00:24:19.917 "uuid": "c7da1bd5-b03c-4022-8055-ab4ad68c6267" 00:24:19.917 } 00:24:19.917 ] 00:24:19.917 } 00:24:19.917 ] 00:24:19.917 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.917 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:19.917 [2024-11-15 11:04:39.305574] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:24:19.917 [2024-11-15 11:04:39.305622] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486424 ] 00:24:19.917 [2024-11-15 11:04:39.363316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:19.917 [2024-11-15 11:04:39.363386] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:19.917 [2024-11-15 11:04:39.363392] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:19.917 [2024-11-15 11:04:39.363408] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:19.917 [2024-11-15 11:04:39.363421] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:19.917 [2024-11-15 11:04:39.364289] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:19.917 [2024-11-15 11:04:39.364337] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xefa690 0 00:24:19.917 [2024-11-15 11:04:39.370585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:19.917 [2024-11-15 11:04:39.370602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:19.917 [2024-11-15 11:04:39.370607] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:19.917 [2024-11-15 11:04:39.370611] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:19.917 [2024-11-15 11:04:39.370653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.370659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.370664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.917 [2024-11-15 11:04:39.370680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:19.917 [2024-11-15 11:04:39.370703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.917 [2024-11-15 11:04:39.378578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.917 [2024-11-15 11:04:39.378589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.917 [2024-11-15 11:04:39.378593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.917 [2024-11-15 11:04:39.378608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:19.917 [2024-11-15 11:04:39.378616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:19.917 [2024-11-15 11:04:39.378622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:19.917 [2024-11-15 11:04:39.378637] nvme_tcp.c: 
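rpc_cmd in the traces is the test harness wrapper around SPDK's JSON-RPC client; the same target-side setup and the identify probe, written as direct invocations against the default /var/tmp/spdk.sock (a sketch assembled from the exact commands logged above):

RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                            # yields the JSON dump above
build/bin/spdk_nvme_identify \
  -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all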
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.917 [2024-11-15 11:04:39.378654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.917 [2024-11-15 11:04:39.378669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.917 [2024-11-15 11:04:39.378913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.917 [2024-11-15 11:04:39.378920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.917 [2024-11-15 11:04:39.378924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.917 [2024-11-15 11:04:39.378933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:19.917 [2024-11-15 11:04:39.378941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:19.917 [2024-11-15 11:04:39.378949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.378956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.917 [2024-11-15 11:04:39.378963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.917 [2024-11-15 11:04:39.378975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.917 [2024-11-15 11:04:39.379174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.917 [2024-11-15 11:04:39.379180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.917 [2024-11-15 11:04:39.379184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.917 [2024-11-15 11:04:39.379198] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:19.917 [2024-11-15 11:04:39.379207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:19.917 [2024-11-15 11:04:39.379214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.917 [2024-11-15 11:04:39.379228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.917 [2024-11-15 11:04:39.379239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 
00:24:19.917 [2024-11-15 11:04:39.379438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.917 [2024-11-15 11:04:39.379445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.917 [2024-11-15 11:04:39.379449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.917 [2024-11-15 11:04:39.379458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:19.917 [2024-11-15 11:04:39.379467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.917 [2024-11-15 11:04:39.379475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.917 [2024-11-15 11:04:39.379482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.917 [2024-11-15 11:04:39.379493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.917 [2024-11-15 11:04:39.379709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.917 [2024-11-15 11:04:39.379716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.917 [2024-11-15 11:04:39.379719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.379723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.918 [2024-11-15 11:04:39.379728] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:19.918 [2024-11-15 11:04:39.379733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:19.918 [2024-11-15 11:04:39.379741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:19.918 [2024-11-15 11:04:39.379851] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:19.918 [2024-11-15 11:04:39.379855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:19.918 [2024-11-15 11:04:39.379865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.379869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.379873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.379879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.918 [2024-11-15 11:04:39.379891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.918 [2024-11-15 11:04:39.380084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.918 [2024-11-15 11:04:39.380096] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.918 [2024-11-15 11:04:39.380099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.918 [2024-11-15 11:04:39.380108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:19.918 [2024-11-15 11:04:39.380118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.380133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.918 [2024-11-15 11:04:39.380143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.918 [2024-11-15 11:04:39.380370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.918 [2024-11-15 11:04:39.380376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.918 [2024-11-15 11:04:39.380380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.918 [2024-11-15 11:04:39.380388] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:19.918 [2024-11-15 11:04:39.380393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.380401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:19.918 [2024-11-15 11:04:39.380410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.380419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.380430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.918 [2024-11-15 11:04:39.380441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.918 [2024-11-15 11:04:39.380692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.918 [2024-11-15 11:04:39.380699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.918 [2024-11-15 11:04:39.380703] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380707] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xefa690): datao=0, datal=4096, cccid=0 00:24:19.918 [2024-11-15 11:04:39.380712] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf5c100) on tqpair(0xefa690): expected_datao=0, payload_size=4096 00:24:19.918 [2024-11-15 11:04:39.380717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380730] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.380735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.421767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.918 [2024-11-15 11:04:39.421780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.918 [2024-11-15 11:04:39.421784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.421789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.918 [2024-11-15 11:04:39.421799] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:19.918 [2024-11-15 11:04:39.421809] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:19.918 [2024-11-15 11:04:39.421813] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:19.918 [2024-11-15 11:04:39.421823] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:19.918 [2024-11-15 11:04:39.421828] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:19.918 [2024-11-15 11:04:39.421833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.421845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.421853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.421857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.421861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.421870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:19.918 [2024-11-15 11:04:39.421885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.918 [2024-11-15 11:04:39.422011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.918 [2024-11-15 11:04:39.422018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.918 [2024-11-15 11:04:39.422021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:19.918 [2024-11-15 11:04:39.422033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 
11:04:39.422047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.918 [2024-11-15 11:04:39.422053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.422067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.918 [2024-11-15 11:04:39.422073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.422086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.918 [2024-11-15 11:04:39.422092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:19.918 [2024-11-15 11:04:39.422105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.918 [2024-11-15 11:04:39.422110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.422121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:19.918 [2024-11-15 11:04:39.422128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.918 [2024-11-15 11:04:39.422132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xefa690) 00:24:19.919 [2024-11-15 11:04:39.422139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-11-15 11:04:39.422152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c100, cid 0, qid 0 00:24:19.919 [2024-11-15 11:04:39.422157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c280, cid 1, qid 0 00:24:19.919 [2024-11-15 11:04:39.422162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c400, cid 2, qid 0 00:24:19.919 [2024-11-15 11:04:39.422167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:19.919 [2024-11-15 11:04:39.422171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c700, cid 4, qid 0 00:24:19.919 [2024-11-15 11:04:39.422428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.919 [2024-11-15 11:04:39.422435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.919 [2024-11-15 11:04:39.422439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.919 
[2024-11-15 11:04:39.422442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c700) on tqpair=0xefa690 00:24:19.919 [2024-11-15 11:04:39.422451] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:19.919 [2024-11-15 11:04:39.422457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:19.919 [2024-11-15 11:04:39.422469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xefa690) 00:24:19.919 [2024-11-15 11:04:39.422479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-11-15 11:04:39.422490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c700, cid 4, qid 0 00:24:19.919 [2024-11-15 11:04:39.422677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.919 [2024-11-15 11:04:39.422684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.919 [2024-11-15 11:04:39.422688] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xefa690): datao=0, datal=4096, cccid=4 00:24:19.919 [2024-11-15 11:04:39.422696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5c700) on tqpair(0xefa690): expected_datao=0, payload_size=4096 00:24:19.919 [2024-11-15 11:04:39.422700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422722] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.919 [2024-11-15 11:04:39.422898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.919 [2024-11-15 11:04:39.422901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c700) on tqpair=0xefa690 00:24:19.919 [2024-11-15 11:04:39.422920] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:19.919 [2024-11-15 11:04:39.422945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xefa690) 00:24:19.919 [2024-11-15 11:04:39.422959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-11-15 11:04:39.422966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.422973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xefa690) 00:24:19.919 [2024-11-15 11:04:39.422980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.919 [2024-11-15 11:04:39.422994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c700, cid 4, qid 0 00:24:19.919 [2024-11-15 11:04:39.422999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c880, cid 5, qid 0 00:24:19.919 [2024-11-15 11:04:39.423246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.919 [2024-11-15 11:04:39.423252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.919 [2024-11-15 11:04:39.423256] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.423259] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xefa690): datao=0, datal=1024, cccid=4 00:24:19.919 [2024-11-15 11:04:39.423264] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5c700) on tqpair(0xefa690): expected_datao=0, payload_size=1024 00:24:19.919 [2024-11-15 11:04:39.423268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.423275] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.423278] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.423284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.919 [2024-11-15 11:04:39.423290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.919 [2024-11-15 11:04:39.423293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.919 [2024-11-15 11:04:39.423297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c880) on tqpair=0xefa690 00:24:20.185 [2024-11-15 11:04:39.463758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.185 [2024-11-15 11:04:39.463773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.185 [2024-11-15 11:04:39.463776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.463781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c700) on tqpair=0xefa690 00:24:20.185 [2024-11-15 11:04:39.463794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.463799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xefa690) 00:24:20.185 [2024-11-15 11:04:39.463806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.185 [2024-11-15 11:04:39.463822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c700, cid 4, qid 0 00:24:20.185 [2024-11-15 11:04:39.464067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.185 [2024-11-15 11:04:39.464075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.185 [2024-11-15 11:04:39.464079] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.464083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xefa690): datao=0, datal=3072, cccid=4 00:24:20.185 [2024-11-15 11:04:39.464087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5c700) on tqpair(0xefa690): expected_datao=0, payload_size=3072 00:24:20.185 [2024-11-15 11:04:39.464092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:20.185 [2024-11-15 11:04:39.464108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.464113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.185 [2024-11-15 11:04:39.506599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.185 [2024-11-15 11:04:39.506603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c700) on tqpair=0xefa690 00:24:20.185 [2024-11-15 11:04:39.506618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xefa690) 00:24:20.185 [2024-11-15 11:04:39.506629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.185 [2024-11-15 11:04:39.506646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c700, cid 4, qid 0 00:24:20.185 [2024-11-15 11:04:39.506800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.185 [2024-11-15 11:04:39.506806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.185 [2024-11-15 11:04:39.506809] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506815] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xefa690): datao=0, datal=8, cccid=4 00:24:20.185 [2024-11-15 11:04:39.506820] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5c700) on tqpair(0xefa690): expected_datao=0, payload_size=8 00:24:20.185 [2024-11-15 11:04:39.506825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.506836] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.547751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.185 [2024-11-15 11:04:39.547763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.185 [2024-11-15 11:04:39.547767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.185 [2024-11-15 11:04:39.547771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c700) on tqpair=0xefa690 00:24:20.185 ===================================================== 00:24:20.185 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:20.185 ===================================================== 00:24:20.185 Controller Capabilities/Features 00:24:20.185 ================================ 00:24:20.185 Vendor ID: 0000 00:24:20.185 Subsystem Vendor ID: 0000 00:24:20.185 Serial Number: .................... 00:24:20.185 Model Number: ........................................ 
00:24:20.185 Firmware Version: 25.01 00:24:20.185 Recommended Arb Burst: 0 00:24:20.185 IEEE OUI Identifier: 00 00 00 00:24:20.185 Multi-path I/O 00:24:20.185 May have multiple subsystem ports: No 00:24:20.185 May have multiple controllers: No 00:24:20.185 Associated with SR-IOV VF: No 00:24:20.185 Max Data Transfer Size: 131072 00:24:20.185 Max Number of Namespaces: 0 00:24:20.185 Max Number of I/O Queues: 1024 00:24:20.185 NVMe Specification Version (VS): 1.3 00:24:20.185 NVMe Specification Version (Identify): 1.3 00:24:20.185 Maximum Queue Entries: 128 00:24:20.185 Contiguous Queues Required: Yes 00:24:20.185 Arbitration Mechanisms Supported 00:24:20.185 Weighted Round Robin: Not Supported 00:24:20.185 Vendor Specific: Not Supported 00:24:20.185 Reset Timeout: 15000 ms 00:24:20.185 Doorbell Stride: 4 bytes 00:24:20.185 NVM Subsystem Reset: Not Supported 00:24:20.185 Command Sets Supported 00:24:20.185 NVM Command Set: Supported 00:24:20.185 Boot Partition: Not Supported 00:24:20.185 Memory Page Size Minimum: 4096 bytes 00:24:20.185 Memory Page Size Maximum: 4096 bytes 00:24:20.185 Persistent Memory Region: Not Supported 00:24:20.185 Optional Asynchronous Events Supported 00:24:20.185 Namespace Attribute Notices: Not Supported 00:24:20.185 Firmware Activation Notices: Not Supported 00:24:20.185 ANA Change Notices: Not Supported 00:24:20.185 PLE Aggregate Log Change Notices: Not Supported 00:24:20.185 LBA Status Info Alert Notices: Not Supported 00:24:20.185 EGE Aggregate Log Change Notices: Not Supported 00:24:20.185 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.185 Zone Descriptor Change Notices: Not Supported 00:24:20.185 Discovery Log Change Notices: Supported 00:24:20.185 Controller Attributes 00:24:20.185 128-bit Host Identifier: Not Supported 00:24:20.185 Non-Operational Permissive Mode: Not Supported 00:24:20.185 NVM Sets: Not Supported 00:24:20.185 Read Recovery Levels: Not Supported 00:24:20.185 Endurance Groups: Not Supported 00:24:20.185 Predictable Latency Mode: Not Supported 00:24:20.185 Traffic Based Keep ALive: Not Supported 00:24:20.185 Namespace Granularity: Not Supported 00:24:20.185 SQ Associations: Not Supported 00:24:20.185 UUID List: Not Supported 00:24:20.185 Multi-Domain Subsystem: Not Supported 00:24:20.185 Fixed Capacity Management: Not Supported 00:24:20.185 Variable Capacity Management: Not Supported 00:24:20.185 Delete Endurance Group: Not Supported 00:24:20.185 Delete NVM Set: Not Supported 00:24:20.185 Extended LBA Formats Supported: Not Supported 00:24:20.185 Flexible Data Placement Supported: Not Supported 00:24:20.185 00:24:20.185 Controller Memory Buffer Support 00:24:20.185 ================================ 00:24:20.185 Supported: No 00:24:20.185 00:24:20.185 Persistent Memory Region Support 00:24:20.185 ================================ 00:24:20.185 Supported: No 00:24:20.185 00:24:20.185 Admin Command Set Attributes 00:24:20.185 ============================ 00:24:20.185 Security Send/Receive: Not Supported 00:24:20.185 Format NVM: Not Supported 00:24:20.185 Firmware Activate/Download: Not Supported 00:24:20.185 Namespace Management: Not Supported 00:24:20.185 Device Self-Test: Not Supported 00:24:20.185 Directives: Not Supported 00:24:20.185 NVMe-MI: Not Supported 00:24:20.185 Virtualization Management: Not Supported 00:24:20.185 Doorbell Buffer Config: Not Supported 00:24:20.185 Get LBA Status Capability: Not Supported 00:24:20.185 Command & Feature Lockdown Capability: Not Supported 00:24:20.185 Abort Command Limit: 1 00:24:20.185 Async 
Event Request Limit: 4 00:24:20.185 Number of Firmware Slots: N/A 00:24:20.185 Firmware Slot 1 Read-Only: N/A 00:24:20.185 Firmware Activation Without Reset: N/A 00:24:20.185 Multiple Update Detection Support: N/A 00:24:20.185 Firmware Update Granularity: No Information Provided 00:24:20.185 Per-Namespace SMART Log: No 00:24:20.186 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.186 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:20.186 Command Effects Log Page: Not Supported 00:24:20.186 Get Log Page Extended Data: Supported 00:24:20.186 Telemetry Log Pages: Not Supported 00:24:20.186 Persistent Event Log Pages: Not Supported 00:24:20.186 Supported Log Pages Log Page: May Support 00:24:20.186 Commands Supported & Effects Log Page: Not Supported 00:24:20.186 Feature Identifiers & Effects Log Page:May Support 00:24:20.186 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.186 Data Area 4 for Telemetry Log: Not Supported 00:24:20.186 Error Log Page Entries Supported: 128 00:24:20.186 Keep Alive: Not Supported 00:24:20.186 00:24:20.186 NVM Command Set Attributes 00:24:20.186 ========================== 00:24:20.186 Submission Queue Entry Size 00:24:20.186 Max: 1 00:24:20.186 Min: 1 00:24:20.186 Completion Queue Entry Size 00:24:20.186 Max: 1 00:24:20.186 Min: 1 00:24:20.186 Number of Namespaces: 0 00:24:20.186 Compare Command: Not Supported 00:24:20.186 Write Uncorrectable Command: Not Supported 00:24:20.186 Dataset Management Command: Not Supported 00:24:20.186 Write Zeroes Command: Not Supported 00:24:20.186 Set Features Save Field: Not Supported 00:24:20.186 Reservations: Not Supported 00:24:20.186 Timestamp: Not Supported 00:24:20.186 Copy: Not Supported 00:24:20.186 Volatile Write Cache: Not Present 00:24:20.186 Atomic Write Unit (Normal): 1 00:24:20.186 Atomic Write Unit (PFail): 1 00:24:20.186 Atomic Compare & Write Unit: 1 00:24:20.186 Fused Compare & Write: Supported 00:24:20.186 Scatter-Gather List 00:24:20.186 SGL Command Set: Supported 00:24:20.186 SGL Keyed: Supported 00:24:20.186 SGL Bit Bucket Descriptor: Not Supported 00:24:20.186 SGL Metadata Pointer: Not Supported 00:24:20.186 Oversized SGL: Not Supported 00:24:20.186 SGL Metadata Address: Not Supported 00:24:20.186 SGL Offset: Supported 00:24:20.186 Transport SGL Data Block: Not Supported 00:24:20.186 Replay Protected Memory Block: Not Supported 00:24:20.186 00:24:20.186 Firmware Slot Information 00:24:20.186 ========================= 00:24:20.186 Active slot: 0 00:24:20.186 00:24:20.186 00:24:20.186 Error Log 00:24:20.186 ========= 00:24:20.186 00:24:20.186 Active Namespaces 00:24:20.186 ================= 00:24:20.186 Discovery Log Page 00:24:20.186 ================== 00:24:20.186 Generation Counter: 2 00:24:20.186 Number of Records: 2 00:24:20.186 Record Format: 0 00:24:20.186 00:24:20.186 Discovery Log Entry 0 00:24:20.186 ---------------------- 00:24:20.186 Transport Type: 3 (TCP) 00:24:20.186 Address Family: 1 (IPv4) 00:24:20.186 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:20.186 Entry Flags: 00:24:20.186 Duplicate Returned Information: 1 00:24:20.186 Explicit Persistent Connection Support for Discovery: 1 00:24:20.186 Transport Requirements: 00:24:20.186 Secure Channel: Not Required 00:24:20.186 Port ID: 0 (0x0000) 00:24:20.186 Controller ID: 65535 (0xffff) 00:24:20.186 Admin Max SQ Size: 128 00:24:20.186 Transport Service Identifier: 4420 00:24:20.186 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:20.186 Transport Address: 10.0.0.2 00:24:20.186 
Discovery Log Entry 1 00:24:20.186 ---------------------- 00:24:20.186 Transport Type: 3 (TCP) 00:24:20.186 Address Family: 1 (IPv4) 00:24:20.186 Subsystem Type: 2 (NVM Subsystem) 00:24:20.186 Entry Flags: 00:24:20.186 Duplicate Returned Information: 0 00:24:20.186 Explicit Persistent Connection Support for Discovery: 0 00:24:20.186 Transport Requirements: 00:24:20.186 Secure Channel: Not Required 00:24:20.186 Port ID: 0 (0x0000) 00:24:20.186 Controller ID: 65535 (0xffff) 00:24:20.186 Admin Max SQ Size: 128 00:24:20.186 Transport Service Identifier: 4420 00:24:20.186 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:20.186 Transport Address: 10.0.0.2 [2024-11-15 11:04:39.547883] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:20.186 [2024-11-15 11:04:39.547894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c100) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.547901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.186 [2024-11-15 11:04:39.547907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c280) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.547912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.186 [2024-11-15 11:04:39.547917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c400) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.547921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.186 [2024-11-15 11:04:39.547926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.547931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.186 [2024-11-15 11:04:39.547945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.547949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.547953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.186 [2024-11-15 11:04:39.547961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.186 [2024-11-15 11:04:39.547976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.186 [2024-11-15 11:04:39.548096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.186 [2024-11-15 11:04:39.548102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.186 [2024-11-15 11:04:39.548108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.548121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.186 [2024-11-15 11:04:39.548135] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.186 [2024-11-15 11:04:39.548149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.186 [2024-11-15 11:04:39.548396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.186 [2024-11-15 11:04:39.548402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.186 [2024-11-15 11:04:39.548406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.548416] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:20.186 [2024-11-15 11:04:39.548420] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:20.186 [2024-11-15 11:04:39.548430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.186 [2024-11-15 11:04:39.548444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.186 [2024-11-15 11:04:39.548455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.186 [2024-11-15 11:04:39.548648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.186 [2024-11-15 11:04:39.548655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.186 [2024-11-15 11:04:39.548659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.548674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.186 [2024-11-15 11:04:39.548688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.186 [2024-11-15 11:04:39.548698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.186 [2024-11-15 11:04:39.548913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.186 [2024-11-15 11:04:39.548919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.186 [2024-11-15 11:04:39.548923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.186 [2024-11-15 11:04:39.548936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.186 [2024-11-15 11:04:39.548944] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.186 [2024-11-15 11:04:39.548951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.186 [2024-11-15 11:04:39.548961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.549152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.549158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.549162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.549175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.549189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.549200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.549403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.549409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.549412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.549426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.549440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.549451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.549657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.549664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.549667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.549681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.549695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.549706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.549923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.549930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.549933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.549947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.549954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.549961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.549971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.550211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.550220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.550223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.550237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.550251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.550261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.550461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.550467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.550470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.550484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.550491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xefa690) 00:24:20.187 [2024-11-15 11:04:39.550498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.550509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024-11-15 11:04:39.554573] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.554581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.554585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.554589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c580) on tqpair=0xefa690 00:24:20.187 [2024-11-15 11:04:39.554597] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:20.187 00:24:20.187 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:20.187 [2024-11-15 11:04:39.602578] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:24:20.187 [2024-11-15 11:04:39.602626] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486554 ] 00:24:20.187 [2024-11-15 11:04:39.658100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:20.187 [2024-11-15 11:04:39.658165] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.187 [2024-11-15 11:04:39.658171] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.187 [2024-11-15 11:04:39.658189] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.187 [2024-11-15 11:04:39.658201] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.187 [2024-11-15 11:04:39.661858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:20.187 [2024-11-15 11:04:39.661903] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x795690 0 00:24:20.187 [2024-11-15 11:04:39.669583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.187 [2024-11-15 11:04:39.669599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.187 [2024-11-15 11:04:39.669603] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.187 [2024-11-15 11:04:39.669607] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.187 [2024-11-15 11:04:39.669644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.669650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.669654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.187 [2024-11-15 11:04:39.669669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.187 [2024-11-15 11:04:39.669692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.187 [2024-11-15 11:04:39.677580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.677589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.677593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.187 [2024-11-15 11:04:39.677608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:20.187 [2024-11-15 11:04:39.677616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:20.187 [2024-11-15 11:04:39.677622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:20.187 [2024-11-15 11:04:39.677636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.187 [2024-11-15 11:04:39.677653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.187 [2024-11-15 11:04:39.677670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.187 [2024-11-15 11:04:39.677861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.187 [2024-11-15 11:04:39.677868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.187 [2024-11-15 11:04:39.677871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.187 [2024-11-15 11:04:39.677881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:20.187 [2024-11-15 11:04:39.677889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:20.187 [2024-11-15 11:04:39.677896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.187 [2024-11-15 11:04:39.677904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.187 [2024-11-15 11:04:39.677911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.677922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.678087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.678094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.678097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.678111] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:20.188 [2024-11-15 11:04:39.678120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.678141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.678152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.678328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.678334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.678338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.678346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.678371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.678381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.678550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.678557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.678560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.678575] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:20.188 [2024-11-15 11:04:39.678580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678697] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:20.188 [2024-11-15 11:04:39.678702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678714] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.678724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.678735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.678913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.678920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.678924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.678932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.188 [2024-11-15 11:04:39.678942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.678950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.678956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.678967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.679139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.679146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.679149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.679158] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.188 [2024-11-15 11:04:39.679163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:20.188 [2024-11-15 11:04:39.679171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:20.188 [2024-11-15 11:04:39.679179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.188 [2024-11-15 11:04:39.679188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.679199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.188 [2024-11-15 11:04:39.679210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.679456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.188 [2024-11-15 11:04:39.679463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.188 [2024-11-15 11:04:39.679466] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679471] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=4096, cccid=0 00:24:20.188 [2024-11-15 11:04:39.679475] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7100) on tqpair(0x795690): expected_datao=0, payload_size=4096 00:24:20.188 [2024-11-15 11:04:39.679480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679488] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679492] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.679635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.679638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.679650] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:20.188 [2024-11-15 11:04:39.679660] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:20.188 [2024-11-15 11:04:39.679665] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:20.188 [2024-11-15 11:04:39.679672] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:20.188 [2024-11-15 11:04:39.679677] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:20.188 [2024-11-15 11:04:39.679682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:20.188 [2024-11-15 11:04:39.679693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.188 [2024-11-15 11:04:39.679700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.679715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.188 [2024-11-15 11:04:39.679727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.188 [2024-11-15 11:04:39.679942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.188 [2024-11-15 11:04:39.679948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.188 [2024-11-15 11:04:39.679952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:20.188 [2024-11-15 11:04:39.679956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.188 [2024-11-15 11:04:39.679963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.679977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.188 [2024-11-15 11:04:39.679983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.679990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.679996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.188 [2024-11-15 11:04:39.680002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.680006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.680010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x795690) 00:24:20.188 [2024-11-15 11:04:39.680015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.188 [2024-11-15 11:04:39.680022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.188 [2024-11-15 11:04:39.680025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.680035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.189 [2024-11-15 11:04:39.680040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.680070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.189 [2024-11-15 11:04:39.680082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7100, cid 0, qid 0 00:24:20.189 [2024-11-15 11:04:39.680087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7280, cid 1, qid 0 00:24:20.189 [2024-11-15 11:04:39.680092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7400, cid 2, qid 0 00:24:20.189 [2024-11-15 11:04:39.680097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x7f7580, cid 3, qid 0 00:24:20.189 [2024-11-15 11:04:39.680102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.189 [2024-11-15 11:04:39.680364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.189 [2024-11-15 11:04:39.680370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.189 [2024-11-15 11:04:39.680374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.189 [2024-11-15 11:04:39.680385] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:20.189 [2024-11-15 11:04:39.680390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.680426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.189 [2024-11-15 11:04:39.680437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.189 [2024-11-15 11:04:39.680617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.189 [2024-11-15 11:04:39.680624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.189 [2024-11-15 11:04:39.680628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.189 [2024-11-15 11:04:39.680697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.680715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.680725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.189 [2024-11-15 11:04:39.680736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.189 [2024-11-15 11:04:39.680938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.189 
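[editor's note on the trace above] The DEBUG records trace SPDK's controller-initialization state machine for nqn.2016-06.io.spdk:cnode1: connect adminq, read vs/cap, check en, enable the controller by writing CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, set the keep-alive timeout, then set the number of queues. A minimal host-side sketch that drives the same sequence through SPDK's public NVMe API follows; it is not part of the test run, and the program name and prints are illustrative only.

/* Hedged sketch, not part of this test run: connect to the same target
 * the log traces above, using SPDK's public NVMe host API.
 * spdk_nvme_connect() internally walks the state machine whose
 * "setting state to ..." DEBUG lines appear in the trace. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport string passed to spdk_nvme_identify via -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Runs connect adminq -> read vs/cap -> enable CC.EN ->
	 * wait CSTS.RDY -> IDENTIFY -> configure AER -> keep-alive ->
	 * set number of queues, i.e. the states logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected: cntlid=0x%04x mdts=%u\n", cdata->cntlid, cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}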
[2024-11-15 11:04:39.680945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.189 [2024-11-15 11:04:39.680949] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680953] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=4096, cccid=4 00:24:20.189 [2024-11-15 11:04:39.680957] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7700) on tqpair(0x795690): expected_datao=0, payload_size=4096 00:24:20.189 [2024-11-15 11:04:39.680962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680969] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.680973] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.189 [2024-11-15 11:04:39.681133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.189 [2024-11-15 11:04:39.681137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.189 [2024-11-15 11:04:39.681150] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:20.189 [2024-11-15 11:04:39.681167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.681177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.681184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.681194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.189 [2024-11-15 11:04:39.681206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.189 [2024-11-15 11:04:39.681430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.189 [2024-11-15 11:04:39.681436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.189 [2024-11-15 11:04:39.681440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681443] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=4096, cccid=4 00:24:20.189 [2024-11-15 11:04:39.681448] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7700) on tqpair(0x795690): expected_datao=0, payload_size=4096 00:24:20.189 [2024-11-15 11:04:39.681452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681468] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.681472] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.189 [2024-11-15 11:04:39.685584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:24:20.189 [2024-11-15 11:04:39.685588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.189 [2024-11-15 11:04:39.685606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.685617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:20.189 [2024-11-15 11:04:39.685624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.189 [2024-11-15 11:04:39.685637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.189 [2024-11-15 11:04:39.685650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.189 [2024-11-15 11:04:39.685834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.189 [2024-11-15 11:04:39.685841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.189 [2024-11-15 11:04:39.685844] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685848] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=4096, cccid=4 00:24:20.189 [2024-11-15 11:04:39.685852] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7700) on tqpair(0x795690): expected_datao=0, payload_size=4096 00:24:20.189 [2024-11-15 11:04:39.685857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.685878] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.686022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.189 [2024-11-15 11:04:39.686028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.189 [2024-11-15 11:04:39.686031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.189 [2024-11-15 11:04:39.686035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.189 [2024-11-15 11:04:39.686043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686082] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:20.190 [2024-11-15 11:04:39.686087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:20.190 [2024-11-15 11:04:39.686093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:20.190 [2024-11-15 11:04:39.686110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.686121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.686129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.686142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.190 [2024-11-15 11:04:39.686157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.190 [2024-11-15 11:04:39.686164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7880, cid 5, qid 0 00:24:20.190 [2024-11-15 11:04:39.686366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 11:04:39.686372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.686376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.190 [2024-11-15 11:04:39.686386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 11:04:39.686392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.686396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7880) on tqpair=0x795690 00:24:20.190 [2024-11-15 11:04:39.686409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.686420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.686430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7880, cid 5, qid 0 00:24:20.190 [2024-11-15 11:04:39.686627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 
11:04:39.686634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.686638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7880) on tqpair=0x795690 00:24:20.190 [2024-11-15 11:04:39.686651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.686661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.686672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7880, cid 5, qid 0 00:24:20.190 [2024-11-15 11:04:39.686865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 11:04:39.686872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.686875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7880) on tqpair=0x795690 00:24:20.190 [2024-11-15 11:04:39.686888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.686892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.686899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.686909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7880, cid 5, qid 0 00:24:20.190 [2024-11-15 11:04:39.687103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 11:04:39.687109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.687112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7880) on tqpair=0x795690 00:24:20.190 [2024-11-15 11:04:39.687132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.687143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.687152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.687162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.687170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x795690) 
00:24:20.190 [2024-11-15 11:04:39.687180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.687188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x795690) 00:24:20.190 [2024-11-15 11:04:39.687198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.190 [2024-11-15 11:04:39.687210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7880, cid 5, qid 0 00:24:20.190 [2024-11-15 11:04:39.687215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7700, cid 4, qid 0 00:24:20.190 [2024-11-15 11:04:39.687220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7a00, cid 6, qid 0 00:24:20.190 [2024-11-15 11:04:39.687225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7b80, cid 7, qid 0 00:24:20.190 [2024-11-15 11:04:39.687532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.190 [2024-11-15 11:04:39.687539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.190 [2024-11-15 11:04:39.687542] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.687546] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=8192, cccid=5 00:24:20.190 [2024-11-15 11:04:39.687550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7880) on tqpair(0x795690): expected_datao=0, payload_size=8192 00:24:20.190 [2024-11-15 11:04:39.687555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691573] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691581] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.190 [2024-11-15 11:04:39.691593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.190 [2024-11-15 11:04:39.691596] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691600] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=512, cccid=4 00:24:20.190 [2024-11-15 11:04:39.691605] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7700) on tqpair(0x795690): expected_datao=0, payload_size=512 00:24:20.190 [2024-11-15 11:04:39.691609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691616] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691620] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.190 [2024-11-15 11:04:39.691631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.190 [2024-11-15 11:04:39.691635] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691638] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x795690): datao=0, datal=512, cccid=6 00:24:20.190 [2024-11-15 11:04:39.691643] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7a00) on tqpair(0x795690): expected_datao=0, payload_size=512 00:24:20.190 [2024-11-15 11:04:39.691647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691658] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691662] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.190 [2024-11-15 11:04:39.691675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.190 [2024-11-15 11:04:39.691678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x795690): datao=0, datal=4096, cccid=7 00:24:20.190 [2024-11-15 11:04:39.691686] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7f7b80) on tqpair(0x795690): expected_datao=0, payload_size=4096 00:24:20.190 [2024-11-15 11:04:39.691691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691697] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691701] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.190 [2024-11-15 11:04:39.691713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.190 [2024-11-15 11:04:39.691716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.190 [2024-11-15 11:04:39.691720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7880) on tqpair=0x795690 00:24:20.191 [2024-11-15 11:04:39.691734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.191 [2024-11-15 11:04:39.691740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.191 [2024-11-15 11:04:39.691743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.191 [2024-11-15 11:04:39.691747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7700) on tqpair=0x795690 00:24:20.191 [2024-11-15 11:04:39.691758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.191 [2024-11-15 11:04:39.691764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.191 [2024-11-15 11:04:39.691768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.191 [2024-11-15 11:04:39.691771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7a00) on tqpair=0x795690 00:24:20.191 [2024-11-15 11:04:39.691779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.191 [2024-11-15 11:04:39.691784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.191 [2024-11-15 11:04:39.691788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.191 [2024-11-15 11:04:39.691792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7b80) on tqpair=0x795690 00:24:20.191 ===================================================== 00:24:20.191 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.191 ===================================================== 
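Note: the controller dump below was produced by the SPDK host identify example (driven by host/identify.sh, whose script trace appears after the dump). For reference, a rough nvme-cli equivalent against the same listener is sketched here; the /dev/nvme0 device name is illustrative and the kernel nvme-tcp initiator is assumed to be available:
  modprobe nvme-tcp                                            # kernel NVMe/TCP initiator
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0      # controller capabilities, cf. the dump below
  nvme id-ns /dev/nvme0 -n 1   # namespace data, cf. "Active Namespaces" below
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1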
00:24:20.191 Controller Capabilities/Features 00:24:20.191 ================================ 00:24:20.191 Vendor ID: 8086 00:24:20.191 Subsystem Vendor ID: 8086 00:24:20.191 Serial Number: SPDK00000000000001 00:24:20.191 Model Number: SPDK bdev Controller 00:24:20.191 Firmware Version: 25.01 00:24:20.191 Recommended Arb Burst: 6 00:24:20.191 IEEE OUI Identifier: e4 d2 5c 00:24:20.191 Multi-path I/O 00:24:20.191 May have multiple subsystem ports: Yes 00:24:20.191 May have multiple controllers: Yes 00:24:20.191 Associated with SR-IOV VF: No 00:24:20.191 Max Data Transfer Size: 131072 00:24:20.191 Max Number of Namespaces: 32 00:24:20.191 Max Number of I/O Queues: 127 00:24:20.191 NVMe Specification Version (VS): 1.3 00:24:20.191 NVMe Specification Version (Identify): 1.3 00:24:20.191 Maximum Queue Entries: 128 00:24:20.191 Contiguous Queues Required: Yes 00:24:20.191 Arbitration Mechanisms Supported 00:24:20.191 Weighted Round Robin: Not Supported 00:24:20.191 Vendor Specific: Not Supported 00:24:20.191 Reset Timeout: 15000 ms 00:24:20.191 Doorbell Stride: 4 bytes 00:24:20.191 NVM Subsystem Reset: Not Supported 00:24:20.191 Command Sets Supported 00:24:20.191 NVM Command Set: Supported 00:24:20.191 Boot Partition: Not Supported 00:24:20.191 Memory Page Size Minimum: 4096 bytes 00:24:20.191 Memory Page Size Maximum: 4096 bytes 00:24:20.191 Persistent Memory Region: Not Supported 00:24:20.191 Optional Asynchronous Events Supported 00:24:20.191 Namespace Attribute Notices: Supported 00:24:20.191 Firmware Activation Notices: Not Supported 00:24:20.191 ANA Change Notices: Not Supported 00:24:20.191 PLE Aggregate Log Change Notices: Not Supported 00:24:20.191 LBA Status Info Alert Notices: Not Supported 00:24:20.191 EGE Aggregate Log Change Notices: Not Supported 00:24:20.191 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.191 Zone Descriptor Change Notices: Not Supported 00:24:20.191 Discovery Log Change Notices: Not Supported 00:24:20.191 Controller Attributes 00:24:20.191 128-bit Host Identifier: Supported 00:24:20.191 Non-Operational Permissive Mode: Not Supported 00:24:20.191 NVM Sets: Not Supported 00:24:20.191 Read Recovery Levels: Not Supported 00:24:20.191 Endurance Groups: Not Supported 00:24:20.191 Predictable Latency Mode: Not Supported 00:24:20.191 Traffic Based Keep Alive: Not Supported 00:24:20.191 Namespace Granularity: Not Supported 00:24:20.191 SQ Associations: Not Supported 00:24:20.191 UUID List: Not Supported 00:24:20.191 Multi-Domain Subsystem: Not Supported 00:24:20.191 Fixed Capacity Management: Not Supported 00:24:20.191 Variable Capacity Management: Not Supported 00:24:20.191 Delete Endurance Group: Not Supported 00:24:20.191 Delete NVM Set: Not Supported 00:24:20.191 Extended LBA Formats Supported: Not Supported 00:24:20.191 Flexible Data Placement Supported: Not Supported 00:24:20.191 00:24:20.191 Controller Memory Buffer Support 00:24:20.191 ================================ 00:24:20.191 Supported: No 00:24:20.191 00:24:20.191 Persistent Memory Region Support 00:24:20.191 ================================ 00:24:20.191 Supported: No 00:24:20.191 00:24:20.191 Admin Command Set Attributes 00:24:20.191 ============================ 00:24:20.191 Security Send/Receive: Not Supported 00:24:20.191 Format NVM: Not Supported 00:24:20.191 Firmware Activate/Download: Not Supported 00:24:20.191 Namespace Management: Not Supported 00:24:20.191 Device Self-Test: Not Supported 00:24:20.191 Directives: Not Supported 00:24:20.191 NVMe-MI: Not Supported 00:24:20.191 
Virtualization Management: Not Supported 00:24:20.191 Doorbell Buffer Config: Not Supported 00:24:20.191 Get LBA Status Capability: Not Supported 00:24:20.191 Command & Feature Lockdown Capability: Not Supported 00:24:20.191 Abort Command Limit: 4 00:24:20.191 Async Event Request Limit: 4 00:24:20.191 Number of Firmware Slots: N/A 00:24:20.191 Firmware Slot 1 Read-Only: N/A 00:24:20.191 Firmware Activation Without Reset: N/A 00:24:20.191 Multiple Update Detection Support: N/A 00:24:20.191 Firmware Update Granularity: No Information Provided 00:24:20.191 Per-Namespace SMART Log: No 00:24:20.191 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.191 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:20.191 Command Effects Log Page: Supported 00:24:20.191 Get Log Page Extended Data: Supported 00:24:20.191 Telemetry Log Pages: Not Supported 00:24:20.191 Persistent Event Log Pages: Not Supported 00:24:20.191 Supported Log Pages Log Page: May Support 00:24:20.191 Commands Supported & Effects Log Page: Not Supported 00:24:20.191 Feature Identifiers & Effects Log Page: May Support 00:24:20.191 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.191 Data Area 4 for Telemetry Log: Not Supported 00:24:20.191 Error Log Page Entries Supported: 128 00:24:20.191 Keep Alive: Supported 00:24:20.191 Keep Alive Granularity: 10000 ms 00:24:20.191 00:24:20.191 NVM Command Set Attributes 00:24:20.191 ========================== 00:24:20.191 Submission Queue Entry Size 00:24:20.191 Max: 64 00:24:20.191 Min: 64 00:24:20.191 Completion Queue Entry Size 00:24:20.191 Max: 16 00:24:20.191 Min: 16 00:24:20.191 Number of Namespaces: 32 00:24:20.191 Compare Command: Supported 00:24:20.191 Write Uncorrectable Command: Not Supported 00:24:20.191 Dataset Management Command: Supported 00:24:20.191 Write Zeroes Command: Supported 00:24:20.191 Set Features Save Field: Not Supported 00:24:20.191 Reservations: Supported 00:24:20.191 Timestamp: Not Supported 00:24:20.191 Copy: Supported 00:24:20.191 Volatile Write Cache: Present 00:24:20.191 Atomic Write Unit (Normal): 1 00:24:20.191 Atomic Write Unit (PFail): 1 00:24:20.191 Atomic Compare & Write Unit: 1 00:24:20.191 Fused Compare & Write: Supported 00:24:20.191 Scatter-Gather List 00:24:20.191 SGL Command Set: Supported 00:24:20.191 SGL Keyed: Supported 00:24:20.191 SGL Bit Bucket Descriptor: Not Supported 00:24:20.191 SGL Metadata Pointer: Not Supported 00:24:20.191 Oversized SGL: Not Supported 00:24:20.191 SGL Metadata Address: Not Supported 00:24:20.191 SGL Offset: Supported 00:24:20.191 Transport SGL Data Block: Not Supported 00:24:20.191 Replay Protected Memory Block: Not Supported 00:24:20.191 00:24:20.191 Firmware Slot Information 00:24:20.191 ========================= 00:24:20.191 Active slot: 1 00:24:20.191 Slot 1 Firmware Revision: 25.01 00:24:20.191 00:24:20.191 00:24:20.191 Commands Supported and Effects 00:24:20.191 ============================== 00:24:20.191 Admin Commands 00:24:20.191 -------------- 00:24:20.191 Get Log Page (02h): Supported 00:24:20.191 Identify (06h): Supported 00:24:20.191 Abort (08h): Supported 00:24:20.191 Set Features (09h): Supported 00:24:20.191 Get Features (0Ah): Supported 00:24:20.191 Asynchronous Event Request (0Ch): Supported 00:24:20.191 Keep Alive (18h): Supported 00:24:20.191 I/O Commands 00:24:20.191 ------------ 00:24:20.191 Flush (00h): Supported LBA-Change 00:24:20.191 Write (01h): Supported LBA-Change 00:24:20.191 Read (02h): Supported 00:24:20.191 Compare (05h): Supported 00:24:20.191 Write Zeroes (08h): 
Supported LBA-Change 00:24:20.191 Dataset Management (09h): Supported LBA-Change 00:24:20.191 Copy (19h): Supported LBA-Change 00:24:20.191 00:24:20.191 Error Log 00:24:20.191 ========= 00:24:20.191 00:24:20.191 Arbitration 00:24:20.191 =========== 00:24:20.191 Arbitration Burst: 1 00:24:20.191 00:24:20.191 Power Management 00:24:20.191 ================ 00:24:20.192 Number of Power States: 1 00:24:20.192 Current Power State: Power State #0 00:24:20.192 Power State #0: 00:24:20.192 Max Power: 0.00 W 00:24:20.192 Non-Operational State: Operational 00:24:20.192 Entry Latency: Not Reported 00:24:20.192 Exit Latency: Not Reported 00:24:20.192 Relative Read Throughput: 0 00:24:20.192 Relative Read Latency: 0 00:24:20.192 Relative Write Throughput: 0 00:24:20.192 Relative Write Latency: 0 00:24:20.192 Idle Power: Not Reported 00:24:20.192 Active Power: Not Reported 00:24:20.192 Non-Operational Permissive Mode: Not Supported 00:24:20.192 00:24:20.192 Health Information 00:24:20.192 ================== 00:24:20.192 Critical Warnings: 00:24:20.192 Available Spare Space: OK 00:24:20.192 Temperature: OK 00:24:20.192 Device Reliability: OK 00:24:20.192 Read Only: No 00:24:20.192 Volatile Memory Backup: OK 00:24:20.192 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:20.192 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:20.192 Available Spare: 0% 00:24:20.192 Available Spare Threshold: 0% 00:24:20.192 Life Percentage Used:[2024-11-15 11:04:39.691898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.691903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.691911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.691925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7b80, cid 7, qid 0 00:24:20.192 [2024-11-15 11:04:39.692143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.692150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.692153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7b80) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692194] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:20.192 [2024-11-15 11:04:39.692204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7100) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.192 [2024-11-15 11:04:39.692215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7280) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.192 [2024-11-15 11:04:39.692228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7400) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.192 [2024-11-15 
11:04:39.692237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.192 [2024-11-15 11:04:39.692251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.692265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.692278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.692499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.692505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.692509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.692534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.692548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.692774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.692781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.692785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.692793] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:20.192 [2024-11-15 11:04:39.692798] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:20.192 [2024-11-15 11:04:39.692808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.692816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.692822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.692833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.693030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.693036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.693040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.693055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.693074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.693085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.693267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.693273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.693277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.693291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.693305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.693315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.693515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.693521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.693524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.693539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.693553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.693570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.693757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.693764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.693767] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.693781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.693789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.693796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.693807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.693989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.693995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.192 [2024-11-15 11:04:39.693999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.694002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.192 [2024-11-15 11:04:39.694012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.694017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.192 [2024-11-15 11:04:39.694023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.192 [2024-11-15 11:04:39.694029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.192 [2024-11-15 11:04:39.694040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.192 [2024-11-15 11:04:39.694243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.192 [2024-11-15 11:04:39.694250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.694253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.694267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.694281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.694292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.694474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.694481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.694484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 
[2024-11-15 11:04:39.694498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.694513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.694523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.694711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.694718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.694721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.694735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.694750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.694760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.694926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.694932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.694936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.694950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.694958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.694967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.694978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.695144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.695150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.695154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.695168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 
11:04:39.695176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.695182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.695193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.695365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.695371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.695375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.695389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.695403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.695414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.695586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.695593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.695596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.695610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.695625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.695636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.695808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.695814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.695818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.695831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.695839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.695846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.695859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.696035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.696042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.696045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.696059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.696074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.696084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.696261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.696267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.696271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.696285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.696299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.696310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 11:04:39.696499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.696506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.696509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.696523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.696531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x795690) 00:24:20.193 [2024-11-15 11:04:39.696538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.193 [2024-11-15 11:04:39.696548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7f7580, cid 3, qid 0 00:24:20.193 [2024-11-15 
11:04:39.700574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.193 [2024-11-15 11:04:39.700583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.193 [2024-11-15 11:04:39.700586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.193 [2024-11-15 11:04:39.700590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7f7580) on tqpair=0x795690 00:24:20.193 [2024-11-15 11:04:39.700598] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:20.456 0% 00:24:20.456 Data Units Read: 0 00:24:20.456 Data Units Written: 0 00:24:20.456 Host Read Commands: 0 00:24:20.456 Host Write Commands: 0 00:24:20.456 Controller Busy Time: 0 minutes 00:24:20.456 Power Cycles: 0 00:24:20.456 Power On Hours: 0 hours 00:24:20.456 Unsafe Shutdowns: 0 00:24:20.456 Unrecoverable Media Errors: 0 00:24:20.456 Lifetime Error Log Entries: 0 00:24:20.456 Warning Temperature Time: 0 minutes 00:24:20.456 Critical Temperature Time: 0 minutes 00:24:20.456 00:24:20.456 Number of Queues 00:24:20.456 ================ 00:24:20.456 Number of I/O Submission Queues: 127 00:24:20.456 Number of I/O Completion Queues: 127 00:24:20.456 00:24:20.456 Active Namespaces 00:24:20.456 ================= 00:24:20.456 Namespace ID:1 00:24:20.456 Error Recovery Timeout: Unlimited 00:24:20.456 Command Set Identifier: NVM (00h) 00:24:20.456 Deallocate: Supported 00:24:20.456 Deallocated/Unwritten Error: Not Supported 00:24:20.456 Deallocated Read Value: Unknown 00:24:20.456 Deallocate in Write Zeroes: Not Supported 00:24:20.456 Deallocated Guard Field: 0xFFFF 00:24:20.456 Flush: Supported 00:24:20.456 Reservation: Supported 00:24:20.456 Namespace Sharing Capabilities: Multiple Controllers 00:24:20.456 Size (in LBAs): 131072 (0GiB) 00:24:20.456 Capacity (in LBAs): 131072 (0GiB) 00:24:20.456 Utilization (in LBAs): 131072 (0GiB) 00:24:20.456 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:20.456 EUI64: ABCDEF0123456789 00:24:20.456 UUID: c7da1bd5-b03c-4022-8055-ab4ad68c6267 00:24:20.456 Thin Provisioning: Not Supported 00:24:20.456 Per-NS Atomic Units: Yes 00:24:20.456 Atomic Boundary Size (Normal): 0 00:24:20.456 Atomic Boundary Size (PFail): 0 00:24:20.456 Atomic Boundary Offset: 0 00:24:20.456 Maximum Single Source Range Length: 65535 00:24:20.456 Maximum Copy Length: 65535 00:24:20.456 Maximum Source Range Count: 1 00:24:20.456 NGUID/EUI64 Never Reused: No 00:24:20.456 Namespace Write Protected: No 00:24:20.456 Number of LBA Formats: 1 00:24:20.456 Current LBA Format: LBA Format #00 00:24:20.456 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.456 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 
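Note: rpc_cmd above is the autotest wrapper around SPDK's JSON-RPC client; a minimal direct equivalent of the subsystem removal it just issued would be the sketch below (the default RPC socket path /var/tmp/spdk.sock is an assumption here, not taken from this log):
  # delete the subsystem created for the identify test; -s selects the RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1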
00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.456 rmmod nvme_tcp 00:24:20.456 rmmod nvme_fabrics 00:24:20.456 rmmod nvme_keyring 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 486193 ']' 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 486193 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 486193 ']' 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 486193 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 486193 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 486193' 00:24:20.456 killing process with pid 486193 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 486193 00:24:20.456 11:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 486193 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.718 11:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.632 11:04:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.632 00:24:22.632 real 0m11.700s 00:24:22.632 user 0m8.650s 00:24:22.632 sys 0m6.226s 00:24:22.632 11:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:22.632 11:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.632 ************************************ 00:24:22.632 END TEST nvmf_identify 00:24:22.632 ************************************ 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.893 ************************************ 00:24:22.893 START TEST nvmf_perf 00:24:22.893 ************************************ 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:22.893 * Looking for test storage... 00:24:22.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:22.893 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.156 --rc genhtml_branch_coverage=1 00:24:23.156 --rc genhtml_function_coverage=1 00:24:23.156 --rc genhtml_legend=1 00:24:23.156 --rc geninfo_all_blocks=1 00:24:23.156 --rc geninfo_unexecuted_blocks=1 00:24:23.156 00:24:23.156 ' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.156 --rc genhtml_branch_coverage=1 00:24:23.156 --rc genhtml_function_coverage=1 00:24:23.156 --rc genhtml_legend=1 00:24:23.156 --rc geninfo_all_blocks=1 00:24:23.156 --rc geninfo_unexecuted_blocks=1 00:24:23.156 00:24:23.156 ' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.156 --rc genhtml_branch_coverage=1 00:24:23.156 --rc genhtml_function_coverage=1 00:24:23.156 --rc genhtml_legend=1 00:24:23.156 --rc geninfo_all_blocks=1 00:24:23.156 --rc geninfo_unexecuted_blocks=1 00:24:23.156 00:24:23.156 ' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.156 --rc genhtml_branch_coverage=1 00:24:23.156 --rc genhtml_function_coverage=1 00:24:23.156 --rc genhtml_legend=1 00:24:23.156 --rc geninfo_all_blocks=1 00:24:23.156 --rc geninfo_unexecuted_blocks=1 00:24:23.156 00:24:23.156 ' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.156 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.157 11:04:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.157 11:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.303 11:04:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.303 11:04:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:24:31.303 00:24:31.303 --- 10.0.0.2 ping statistics --- 00:24:31.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.303 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:31.303 00:24:31.303 --- 10.0.0.1 ping statistics --- 00:24:31.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.303 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.303 11:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=490716 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 490716 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 490716 ']' 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:31.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:31.303 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.303 [2024-11-15 11:04:50.077665] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:24:31.303 [2024-11-15 11:04:50.077740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.303 [2024-11-15 11:04:50.184057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.303 [2024-11-15 11:04:50.237774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.303 [2024-11-15 11:04:50.237826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.303 [2024-11-15 11:04:50.237835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.303 [2024-11-15 11:04:50.237842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.303 [2024-11-15 11:04:50.237848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.303 [2024-11-15 11:04:50.240059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.303 [2024-11-15 11:04:50.240260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.303 [2024-11-15 11:04:50.240425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.303 [2024-11-15 11:04:50.240426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:31.564 11:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:32.186 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:32.186 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:32.186 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:32.186 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:32.481 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
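
Condensed, the target-side setup that the next entries drive is short; the sketch below is a minimal reading of it, not the test script itself. It assumes a running nvmf_tgt reachable on the default /var/tmp/spdk.sock; the rpc shorthand variable is introduced here purely for brevity, and every RPC shown appears verbatim in the surrounding log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                 # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o           # TCP transport, flags as logged below
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
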
00:24:32.481 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:24:32.481 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:32.481 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:32.481 11:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:32.754 [2024-11-15 11:04:52.050690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:32.754 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:33.049 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:33.050 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:33.050 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:33.050 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:33.322 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:33.660 [2024-11-15 11:04:52.854433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:33.660 11:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:33.660 11:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:33.660 11:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:33.660 11:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:33.660 11:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:35.099 Initializing NVMe Controllers
00:24:35.099 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:24:35.099 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:35.099 Initialization complete. Launching workers.
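
In the latency table that follows, the MiB/s column is derivable from IOPS times the 4096-byte IO size, which gives a quick way to sanity-check the numbers; a one-liner (assuming bc is installed on the test host):

    # 79174 IOPS x 4096 B per IO, converted to MiB/s (2^20 bytes):
    echo 'scale=2; 79174 * 4096 / 1048576' | bc    # -> 309.27, matching the row below
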
00:24:35.099 ========================================================
00:24:35.099 Latency(us)
00:24:35.099 Device Information : IOPS MiB/s Average min max
00:24:35.099 PCIE (0000:65:00.0) NSID 1 from core 0: 79174.00 309.27 403.70 13.17 4765.93
00:24:35.100 ========================================================
00:24:35.100 Total : 79174.00 309.27 403.70 13.17 4765.93
00:24:35.100
00:24:35.100 11:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:36.485 Initializing NVMe Controllers
00:24:36.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:36.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:36.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:36.485 Initialization complete. Launching workers.
00:24:36.485 ========================================================
00:24:36.485 Latency(us)
00:24:36.485 Device Information : IOPS MiB/s Average min max
00:24:36.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.00 0.27 15037.76 269.82 45940.41
00:24:36.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13244.69 7960.74 48888.34
00:24:36.485 ========================================================
00:24:36.485 Total : 144.00 0.56 14091.42 269.82 48888.34
00:24:36.485
00:24:36.485 11:04:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:37.870 Initializing NVMe Controllers
00:24:37.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:37.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:37.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:37.870 Initialization complete. Launching workers.
00:24:37.870 ========================================================
00:24:37.870 Latency(us)
00:24:37.870 Device Information : IOPS MiB/s Average min max
00:24:37.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11517.34 44.99 2778.65 380.69 9869.24
00:24:37.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3738.29 14.60 8574.93 7284.99 19783.82
00:24:37.870 ========================================================
00:24:37.870 Total : 15255.63 59.59 4198.99 380.69 19783.82
00:24:37.870
00:24:37.870 11:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:37.870 11:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:37.870 11:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:40.492 Initializing NVMe Controllers
00:24:40.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.492 Controller IO queue size 128, less than required.
00:24:40.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.492 Controller IO queue size 128, less than required.
00:24:40.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:40.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:40.492 Initialization complete. Launching workers.
00:24:40.492 ========================================================
00:24:40.492 Latency(us)
00:24:40.492 Device Information : IOPS MiB/s Average min max
00:24:40.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2416.45 604.11 53664.96 31642.06 89982.07
00:24:40.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.86 152.71 219621.62 71799.88 320558.84
00:24:40.492 ========================================================
00:24:40.492 Total : 3027.31 756.83 87152.25 31642.06 320558.84
00:24:40.492
00:24:40.492 11:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:40.753 No valid NVMe controllers or AIO or URING devices found
00:24:40.753 Initializing NVMe Controllers
00:24:40.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.753 Controller IO queue size 128, less than required.
00:24:40.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:40.753 Controller IO queue size 128, less than required.
00:24:40.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:40.753 WARNING: Some requested NVMe devices were skipped
00:24:40.753 11:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:43.299 Initializing NVMe Controllers
00:24:43.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:43.299 Controller IO queue size 128, less than required.
00:24:43.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.299 Controller IO queue size 128, less than required.
00:24:43.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:43.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:43.299 Initialization complete. Launching workers.
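
Two quick derived checks around this point, using only bash arithmetic and bc (assumed installed). The first explains the "IO size 36964" warnings above: the requested IO size must be a multiple of the 512 B sector size, and it is not. The second turns the raw poll counters printed below into an idle-poll fraction:

    echo $(( 36964 % 512 ))             # -> 100, nonzero, so both namespaces were dropped above
    echo 'scale=3; 13448 / 26734' | bc  # -> .503: ~50% of NSID 1 polls below found no work
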
00:24:43.299
00:24:43.299 ====================
00:24:43.299 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:43.299 TCP transport:
00:24:43.299 polls: 26734
00:24:43.299 idle_polls: 13448
00:24:43.299 sock_completions: 13286
00:24:43.299 nvme_completions: 7545
00:24:43.299 submitted_requests: 11344
00:24:43.299 queued_requests: 1
00:24:43.299
00:24:43.299 ====================
00:24:43.299 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:43.299 TCP transport:
00:24:43.299 polls: 30275
00:24:43.299 idle_polls: 19321
00:24:43.299 sock_completions: 10954
00:24:43.299 nvme_completions: 9347
00:24:43.299 submitted_requests: 13964
00:24:43.299 queued_requests: 1
00:24:43.299 ========================================================
00:24:43.299 Latency(us)
00:24:43.299 Device Information : IOPS MiB/s Average min max
00:24:43.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1885.85 471.46 69049.23 37166.61 116304.63
00:24:43.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2336.31 584.08 55029.76 24352.05 88327.65
00:24:43.299 ========================================================
00:24:43.300 Total : 4222.16 1055.54 61291.62 24352.05 116304.63
00:24:43.300
00:24:43.300 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:43.300 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:43.560 rmmod nvme_tcp
00:24:43.560 rmmod nvme_fabrics
00:24:43.560 rmmod nvme_keyring
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 490716 ']'
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 490716
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 490716 ']'
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 490716
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:43.560 11:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 490716
00:24:43.561 11:05:03
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:43.561 11:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:43.561 11:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 490716' 00:24:43.561 killing process with pid 490716 00:24:43.561 11:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 490716 00:24:43.561 11:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 490716 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.475 11:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.021 00:24:48.021 real 0m24.819s 00:24:48.021 user 1m0.570s 00:24:48.021 sys 0m8.593s 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:48.021 ************************************ 00:24:48.021 END TEST nvmf_perf 00:24:48.021 ************************************ 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.021 ************************************ 00:24:48.021 START TEST nvmf_fio_host 00:24:48.021 ************************************ 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.021 * Looking for test storage... 
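
nvmf_fio_host now reruns nvmftestinit, rebuilding the same two-port topology nvmf_perf just tore down. Condensed to its essentials, and only as a sketch (root required; the cvl_* names and 10.0.0.x addresses are the ones this log reports; the SPDK_NVMF comment tag is what the iptr cleanup step greps away afterwards via iptables-save | grep -v SPDK_NVMF | iptables-restore):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
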
00:24:48.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.021 --rc genhtml_branch_coverage=1 00:24:48.021 --rc genhtml_function_coverage=1 00:24:48.021 --rc genhtml_legend=1 00:24:48.021 --rc geninfo_all_blocks=1 00:24:48.021 --rc geninfo_unexecuted_blocks=1 00:24:48.021 00:24:48.021 ' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.021 --rc genhtml_branch_coverage=1 00:24:48.021 --rc genhtml_function_coverage=1 00:24:48.021 --rc genhtml_legend=1 00:24:48.021 --rc geninfo_all_blocks=1 00:24:48.021 --rc geninfo_unexecuted_blocks=1 00:24:48.021 00:24:48.021 ' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.021 --rc genhtml_branch_coverage=1 00:24:48.021 --rc genhtml_function_coverage=1 00:24:48.021 --rc genhtml_legend=1 00:24:48.021 --rc geninfo_all_blocks=1 00:24:48.021 --rc geninfo_unexecuted_blocks=1 00:24:48.021 00:24:48.021 ' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.021 --rc genhtml_branch_coverage=1 00:24:48.021 --rc genhtml_function_coverage=1 00:24:48.021 --rc genhtml_legend=1 00:24:48.021 --rc geninfo_all_blocks=1 00:24:48.021 --rc geninfo_unexecuted_blocks=1 00:24:48.021 00:24:48.021 ' 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.021 11:05:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:48.021 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.022 
11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.022 11:05:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.168 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:56.169 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:56.169 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:56.169 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:56.169 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:24:56.169 00:24:56.169 --- 10.0.0.2 ping statistics --- 00:24:56.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.169 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:24:56.169 00:24:56.169 --- 10.0.0.1 ping statistics --- 00:24:56.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.169 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.169 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=497898 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 497898 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 497898 ']' 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.170 11:05:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.170 [2024-11-15 11:05:15.045650] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
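The nvmf_tcp_init sequence traced above is what gives this test a real two-port topology instead of loopback: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of just that setup, with interface names and addresses copied from the trace (needs root):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt instance whose startup banner appears here is then launched inside that namespace through NVMF_TARGET_NS_CMD, which is why every listener it opens on 10.0.0.2 is reachable only through cvl_0_1.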
00:24:56.170 [2024-11-15 11:05:15.045723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.170 [2024-11-15 11:05:15.147114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.170 [2024-11-15 11:05:15.200213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.170 [2024-11-15 11:05:15.200268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.170 [2024-11-15 11:05:15.200282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.170 [2024-11-15 11:05:15.200289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.170 [2024-11-15 11:05:15.200295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.170 [2024-11-15 11:05:15.202442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.170 [2024-11-15 11:05:15.202613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.170 [2024-11-15 11:05:15.202704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.170 [2024-11-15 11:05:15.202705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.432 11:05:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.432 11:05:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:56.432 11:05:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.693 [2024-11-15 11:05:16.025146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.693 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:56.693 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.693 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.693 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:56.953 Malloc1 00:24:56.953 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.213 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.213 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.474 [2024-11-15 11:05:16.902457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.474 11:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:57.736 11:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.998 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:57.998 fio-3.35 00:24:57.998 Starting 1 thread 00:25:00.544 00:25:00.544 test: (groupid=0, jobs=1): 
err= 0: pid=498486: Fri Nov 15 11:05:19 2024 00:25:00.544 read: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(98.5MiB/2004msec) 00:25:00.544 slat (usec): min=2, max=277, avg= 2.17, stdev= 2.54 00:25:00.544 clat (usec): min=3715, max=9578, avg=5597.78, stdev=1016.31 00:25:00.544 lat (usec): min=3717, max=9584, avg=5599.95, stdev=1016.40 00:25:00.544 clat percentiles (usec): 00:25:00.544 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4752], 20.00th=[ 4883], 00:25:00.544 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:25:00.544 | 70.00th=[ 5538], 80.00th=[ 6587], 90.00th=[ 7439], 95.00th=[ 7767], 00:25:00.544 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 9241], 00:25:00.544 | 99.99th=[ 9503] 00:25:00.544 bw ( KiB/s): min=37072, max=55440, per=99.88%, avg=50248.00, stdev=8821.37, samples=4 00:25:00.544 iops : min= 9268, max=13860, avg=12562.00, stdev=2205.34, samples=4 00:25:00.544 write: IOPS=12.6k, BW=49.1MiB/s (51.4MB/s)(98.3MiB/2004msec); 0 zone resets 00:25:00.544 slat (usec): min=2, max=269, avg= 2.23, stdev= 1.88 00:25:00.544 clat (usec): min=2896, max=8054, avg=4543.36, stdev=822.59 00:25:00.544 lat (usec): min=2914, max=8178, avg=4545.59, stdev=822.71 00:25:00.544 clat percentiles (usec): 00:25:00.544 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 3982], 00:25:00.544 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:25:00.544 | 70.00th=[ 4490], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6325], 00:25:00.544 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7635], 00:25:00.544 | 99.99th=[ 7832] 00:25:00.544 bw ( KiB/s): min=37960, max=55680, per=100.00%, avg=50248.00, stdev=8331.08, samples=4 00:25:00.544 iops : min= 9490, max=13920, avg=12562.00, stdev=2082.77, samples=4 00:25:00.544 lat (msec) : 4=11.44%, 10=88.56% 00:25:00.544 cpu : usr=74.64%, sys=24.16%, ctx=33, majf=0, minf=17 00:25:00.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:00.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.544 issued rwts: total=25205,25170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.544 00:25:00.544 Run status group 0 (all jobs): 00:25:00.544 READ: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=98.5MiB (103MB), run=2004-2004msec 00:25:00.544 WRITE: bw=49.1MiB/s (51.4MB/s), 49.1MiB/s-49.1MiB/s (51.4MB/s-51.4MB/s), io=98.3MiB (103MB), run=2004-2004msec 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 
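Both fio jobs in this test (the example_config.fio run that just reported about 12.6k randrw IOPS, and the mock_sgl_config.fio run being prepared in the surrounding trace) go through the fio_nvme helper, which boils down to plain fio with the SPDK NVMe ioengine preloaded and the target encoded in the --filename string rather than a block device. The effective command, assembled from the LD_PRELOAD and fio lines above:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

Since ioengine=spdk connects over NVMe/TCP to the listener created above, the reported bandwidth and latency figures are end-to-end fabric numbers, not local-device numbers.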
00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:00.544 11:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:00.544 11:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.122 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:01.122 fio-3.35 00:25:01.122 Starting 1 thread 00:25:03.686 00:25:03.686 test: (groupid=0, jobs=1): err= 0: pid=499306: Fri Nov 15 11:05:22 2024 00:25:03.686 read: IOPS=9438, BW=147MiB/s (155MB/s)(295MiB/2002msec) 00:25:03.686 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.68 00:25:03.686 clat (usec): min=1353, max=53224, avg=8466.11, stdev=3796.51 00:25:03.686 lat (usec): min=1357, max=53227, avg=8469.72, stdev=3796.58 00:25:03.686 clat percentiles (usec): 00:25:03.686 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6456], 00:25:03.686 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 8717], 00:25:03.686 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:25:03.686 | 99.00th=[13435], 99.50th=[44827], 99.90th=[52167], 99.95th=[52691], 00:25:03.686 | 99.99th=[53216] 00:25:03.686 bw ( KiB/s): min=64992, max=82720, per=49.22%, avg=74328.00, stdev=8007.86, samples=4 00:25:03.686 iops : min= 4062, max= 5170, avg=4645.50, stdev=500.49, samples=4 00:25:03.686 write: IOPS=5634, BW=88.0MiB/s (92.3MB/s)(152MiB/1725msec); 0 zone resets 00:25:03.686 slat (usec): min=39, 
max=448, avg=41.07, stdev= 9.03 00:25:03.686 clat (usec): min=1787, max=16172, avg=9090.32, stdev=1366.97 00:25:03.686 lat (usec): min=1827, max=16312, avg=9131.40, stdev=1369.56 00:25:03.686 clat percentiles (usec): 00:25:03.686 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:25:03.686 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:25:03.686 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11207], 00:25:03.686 | 99.00th=[12911], 99.50th=[13566], 99.90th=[15926], 99.95th=[16057], 00:25:03.686 | 99.99th=[16188] 00:25:03.686 bw ( KiB/s): min=66784, max=85440, per=85.66%, avg=77216.00, stdev=8262.74, samples=4 00:25:03.686 iops : min= 4174, max= 5340, avg=4826.00, stdev=516.42, samples=4 00:25:03.686 lat (msec) : 2=0.05%, 4=0.39%, 10=77.14%, 20=21.97%, 50=0.27% 00:25:03.686 lat (msec) : 100=0.18% 00:25:03.686 cpu : usr=84.76%, sys=13.84%, ctx=22, majf=0, minf=31 00:25:03.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:03.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:03.686 issued rwts: total=18896,9719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:03.686 00:25:03.686 Run status group 0 (all jobs): 00:25:03.686 READ: bw=147MiB/s (155MB/s), 147MiB/s-147MiB/s (155MB/s-155MB/s), io=295MiB (310MB), run=2002-2002msec 00:25:03.686 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=152MiB (159MB), run=1725-1725msec 00:25:03.686 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.687 rmmod nvme_tcp 00:25:03.687 rmmod nvme_fabrics 00:25:03.687 rmmod nvme_keyring 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 497898 ']' 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 497898 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 497898 ']' 00:25:03.687 11:05:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 497898 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:03.687 11:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 497898 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 497898' 00:25:03.687 killing process with pid 497898 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 497898 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 497898 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.687 11:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.233 00:25:06.233 real 0m18.120s 00:25:06.233 user 0m57.204s 00:25:06.233 sys 0m7.962s 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.233 ************************************ 00:25:06.233 END TEST nvmf_fio_host 00:25:06.233 ************************************ 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.233 ************************************ 00:25:06.233 START TEST nvmf_failover 00:25:06.233 ************************************ 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.233 * Looking for test storage... 00:25:06.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.233 --rc genhtml_branch_coverage=1 00:25:06.233 --rc genhtml_function_coverage=1 00:25:06.233 --rc genhtml_legend=1 00:25:06.233 --rc geninfo_all_blocks=1 00:25:06.233 --rc geninfo_unexecuted_blocks=1 00:25:06.233 00:25:06.233 ' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.233 --rc genhtml_branch_coverage=1 00:25:06.233 --rc genhtml_function_coverage=1 00:25:06.233 --rc genhtml_legend=1 00:25:06.233 --rc geninfo_all_blocks=1 00:25:06.233 --rc geninfo_unexecuted_blocks=1 00:25:06.233 00:25:06.233 ' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.233 --rc genhtml_branch_coverage=1 00:25:06.233 --rc genhtml_function_coverage=1 00:25:06.233 --rc genhtml_legend=1 00:25:06.233 --rc geninfo_all_blocks=1 00:25:06.233 --rc geninfo_unexecuted_blocks=1 00:25:06.233 00:25:06.233 ' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.233 --rc genhtml_branch_coverage=1 00:25:06.233 --rc genhtml_function_coverage=1 00:25:06.233 --rc genhtml_legend=1 00:25:06.233 --rc geninfo_all_blocks=1 00:25:06.233 --rc geninfo_unexecuted_blocks=1 00:25:06.233 00:25:06.233 ' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.233 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
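For reference, the knobs this failover test runs with are all assigned in the preamble here and immediately below; collected in one place, with values copied from the trace (the trailing comments are interpretation, not part of the scripts):

    MALLOC_BDEV_SIZE=64                        # MiB backing the test namespace
    MALLOC_BLOCK_SIZE=512                      # bytes per block
    NVMF_PORT=4420                             # primary listener
    NVMF_SECOND_PORT=4421                      # first failover listener
    NVMF_THIRD_PORT=4422                       # second failover listener
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # host-side bdevperf control socket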
00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.234 11:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.377 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:14.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:14.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:14.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:14.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.378 11:05:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:25:14.378 00:25:14.378 --- 10.0.0.2 ping statistics --- 00:25:14.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.378 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms
00:25:14.378
00:25:14.378 --- 10.0.0.1 ping statistics ---
00:25:14.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.378 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=503969
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 503969
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 503969 ']'
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:14.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:14.378 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.378 [2024-11-15 11:05:33.132773] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:25:14.378 [2024-11-15 11:05:33.132841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:14.378 [2024-11-15 11:05:33.233636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:14.378 [2024-11-15 11:05:33.285866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
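The namespace plumbing performed a moment earlier, together with the connectivity check above, is worth seeing in one place. Condensed from this run's trace, with the interface names and addresses exactly as logged, the wiring is:

    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator interface
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target interface
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Because the target lives inside the namespace, the nvmf_tgt invocation above is wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD holds.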
00:25:14.378 [2024-11-15 11:05:33.285915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:14.378 [2024-11-15 11:05:33.285923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:14.378 [2024-11-15 11:05:33.285930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:14.378 [2024-11-15 11:05:33.285937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:14.379 [2024-11-15 11:05:33.288084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:14.379 [2024-11-15 11:05:33.288220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:14.379 [2024-11-15 11:05:33.288221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.639 11:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-11-15 11:05:34.164391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.900 11:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:14.900 Malloc0
00:25:15.161 11:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:15.161 11:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:15.421 11:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:15.682 [2024-11-15 11:05:34.997630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:15.682 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:15.682 [2024-11-15 11:05:35.194278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:15.944 [2024-11-15 11:05:35.387007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=504337
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 504337 /var/tmp/bdevperf.sock
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 504337 ']'
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:15.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:15.944 11:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:16.886 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:16.886 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:16.886 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.146 NVMe0n1
00:25:17.146 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.406 00
00:25:17.406 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=504677
00:25:17.406 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:17.406 11:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:18.347 11:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:18.607 [2024-11-15 11:05:37.955610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1566ed0 is same with the state(6) to be set
00:25:18.607 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* repeats for each remaining state transition on tqpair=0x1566ed0 while the qpairs on port 4420 are torn down ...]
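To recap the host side that was just wired up: bdevperf runs idle in RPC-driven mode, the same subsystem is attached over two portals with -x failover (so the second path is a standby rather than a second bdev), and perform_tests then drives the verify workload while the listeners are toggled. A condensed sketch, with repository paths abbreviated but arguments exactly as in the trace:

    # Start bdevperf idle (-z) so paths can be attached over its private RPC socket.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

    # Attach the same NQN via two portals; -x failover makes 4421 a standby path for NVMe0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Run the configured 15 s verify job in the background.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The *ERROR* lines above are expected: removing a listener force-closes every qpair on that portal, and tcp.c logs a state-transition complaint for each one as it dies.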
00:25:18.607 11:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:21.904 11:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:21.904 00
00:25:21.904 11:05:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:22.165 [2024-11-15 11:05:41.471005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1567cf0 is same with the state(6) to be set
00:25:22.165 [... the same *ERROR* repeats for tqpair=0x1567cf0 while the qpairs on port 4421 are torn down ...]
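Stripped of the trace noise, the failover choreography the test is exercising is just a short sequence of rpc.py calls against the target and the bdevperf socket; ports as in this run:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420      # drop the active path; host fails over to 4421
    sleep 3
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # add a third path
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421      # drop the new active path; host fails over to 4422

Each remove_listener produces a burst of qpair teardown errors like the ones above, followed by roughly three seconds of settle time before the next step.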
00:25:22.167 11:05:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:25.466 11:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:25.466 [2024-11-15 11:05:44.661643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:25.466 11:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:26.407 11:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:26.407 [2024-11-15 11:05:45.850807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1568bf0 is same with the state(6) to be set
00:25:26.407 [... the same *ERROR* repeats for tqpair=0x1568bf0 while the qpairs on port 4422 are torn down ...]
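With the 4420 listener restored and 4422 removed, I/O should now have failed back to the original portal. When reproducing this interactively it can be useful to confirm which paths the host still holds; bdev_nvme_get_controllers is the stock SPDK RPC for that (flag spelling and output shape may differ between releases, so treat this as a sketch):

    # Ask the host-side driver which controller paths NVMe0 currently has.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0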
00:25:26.407 11:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 504677
00:25:32.998 {
00:25:32.998 "results": [
00:25:32.998 {
00:25:32.998 "job": "NVMe0n1",
00:25:32.998 "core_mask": "0x1",
00:25:32.998 "workload": "verify",
00:25:32.998 "status": "finished",
00:25:32.998 "verify_range": {
00:25:32.998 "start": 0,
00:25:32.998 "length": 16384
00:25:32.998 },
00:25:32.998 "queue_depth": 128,
00:25:32.998 "io_size": 4096,
00:25:32.998 "runtime": 15.043863,
00:25:32.998 "iops": 12510.41703849603,
00:25:32.998 "mibps": 48.86881655662512,
00:25:32.998 "io_failed": 5333,
00:25:32.998 "io_timeout": 0,
00:25:32.998 "avg_latency_us": 9902.365473240397,
00:25:32.998 "min_latency_us": 532.48,
00:25:32.998 "max_latency_us": 42816.85333333333
00:25:32.998 }
00:25:32.998 ],
00:25:32.998 "core_count": 1
00:25:32.998 }
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 504337
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 504337 ']'
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 504337
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:32.998 11:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 504337
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 504337'
00:25:32.998 killing process with pid 504337
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 504337
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 504337
00:25:32.998 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:32.998 [2024-11-15 11:05:35.465058] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:25:32.998 [2024-11-15 11:05:35.465141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504337 ]
00:25:32.998 [2024-11-15 11:05:35.560052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:32.998 [2024-11-15 11:05:35.613198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:32.998 Running I/O for 15 seconds...
00:25:32.998 12251.00 IOPS, 47.86 MiB/s [2024-11-15T10:05:52.525Z]
[2024-11-15 11:05:37.957155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.998 [2024-11-15 11:05:37.957188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:32.998 [... the bdevperf log continues in this pattern for every READ/WRITE that was in flight on qid:1 when the port 4420 listener was removed: each command is printed, then completed with ABORTED - SQ DELETION ...]
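Every completion in this tail is ABORTED - SQ DELETION: commands that were queued on a portal whose listener disappeared are failed back to the bdev layer and retried on the standby path, and those failures are what the io_failed counter in the results block largely reflects. A quick way to eyeball that correlation when triaging a run (the try.txt path is the one cat'ed by failover.sh above):

    # Count aborted completions in the bdevperf log; compare against "io_failed" in the JSON results.
    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt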
00:25:33.001 [2024-11-15 11:05:37.958574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 
11:05:37.958742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-11-15 11:05:37.958841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.958991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.001 [2024-11-15 11:05:37.959157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-11-15 11:05:37.959166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-11-15 11:05:37.959173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-11-15 11:05:37.959182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-11-15 11:05:37.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-11-15 11:05:37.959198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-11-15 11:05:37.959206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-11-15 11:05:37.959215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-11-15 11:05:37.959222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-11-15 11:05:37.959231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-11-15 11:05:37.959238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
00:25:33.002 [2024-11-15 11:05:37.959359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.002 [2024-11-15 11:05:37.959366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.002 [2024-11-15 11:05:37.959373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106224 len:8 PRP1 0x0 PRP2 0x0
00:25:33.002 [2024-11-15 11:05:37.959381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.002 [2024-11-15 11:05:37.959425] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:33.002 [2024-11-15 11:05:37.959448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.002 [2024-11-15 11:05:37.959456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.002 [2024-11-15 11:05:37.959465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.002 [2024-11-15 11:05:37.959472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.002 [2024-11-15 11:05:37.959481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.002 [2024-11-15 11:05:37.959488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.002 [2024-11-15 11:05:37.959496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.002 [2024-11-15 11:05:37.959503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.002 [2024-11-15 11:05:37.959510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:33.002 [2024-11-15 11:05:37.963392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:33.002 [2024-11-15 11:05:37.963417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511d70 (9): Bad file descriptor
00:25:33.002 [2024-11-15 11:05:38.000097] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:33.002 11788.50 IOPS, 46.05 MiB/s [2024-11-15T10:05:52.529Z] 11546.00 IOPS, 45.10 MiB/s [2024-11-15T10:05:52.529Z] 11834.50 IOPS, 46.23 MiB/s [2024-11-15T10:05:52.529Z]
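[editor's note] The bursts of ABORTED - SQ DELETION (00/08) notices around the failover above are the expected host-side signature of a TCP path switch: bdev_nvme deletes the submission queues on the failed path, every outstanding command on them completes as aborted, and I/O resumes once the controller reconnects on the alternate listener (here 10.0.0.2:4421). As a minimal sketch of how a two-listener subsystem of this kind is typically wired up with SPDK's stock rpc.py; the bdev names Nvme0/Malloc0, sizes, and serial number are illustrative assumptions, not read from this run, and -x failover requires an SPDK release with bdev_nvme multipath support:

  # target side: one subsystem, backed by a malloc bdev, listening on two ports
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: attach both paths under one controller name so bdev_nvme
  # has a trid to fail over to when the first connection drops
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With both paths registered under one controller name, a dropped 4420 connection triggers a bdev_nvme_failover_trid / reset sequence of the kind logged above, after which the bdevperf IOPS counters recover.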
00:25:33.002 [2024-11-15 11:05:41.472979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.002 [2024-11-15 11:05:41.473008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: the remaining qid:1 commands in this burst (READ lba:56344-56672, WRITE lba:56680-57208) were printed and completed as ABORTED - SQ DELETION (00/08) between 11:05:41.473020 and 11:05:41.474280 ...]
00:25:33.005 [2024-11-15 11:05:41.474296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.005 [2024-11-15 11:05:41.474301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57216 len:8 PRP1 0x0 PRP2 0x0
00:25:33.005 [2024-11-15 11:05:41.474306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE triples elided: queued WRITE commands cid:0 lba:57224-57288 (PRP1 0x0 PRP2 0x0) were each aborted and completed manually with ABORTED - SQ DELETION (00/08) between 11:05:41.474314 and 11:05:41.474473 ...]
00:25:33.005 [2024-11-15 11:05:41.474478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.006 [2024-11-15 11:05:41.474482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.006 [2024-11-15 11:05:41.474486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:57296 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.474491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.474498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.474502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.474506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57304 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.474511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.474516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.474520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.474524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57312 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.474532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.474538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.474541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.474545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57320 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.474551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.474556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.474560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.474568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57328 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.474573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.485882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.485905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57336 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.485921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.485928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.485932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.485937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:8 PRP1 0x0 PRP2 0x0 
00:25:33.006 [2024-11-15 11:05:41.485942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.485948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.006 [2024-11-15 11:05:41.485952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.006 [2024-11-15 11:05:41.485957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57352 len:8 PRP1 0x0 PRP2 0x0 00:25:33.006 [2024-11-15 11:05:41.485962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.485999] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:33.006 [2024-11-15 11:05:41.486023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.006 [2024-11-15 11:05:41.486033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.486041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.006 [2024-11-15 11:05:41.486047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.486053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.006 [2024-11-15 11:05:41.486058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.486064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.006 [2024-11-15 11:05:41.486069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.006 [2024-11-15 11:05:41.486075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:33.006 [2024-11-15 11:05:41.486109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511d70 (9): Bad file descriptor 00:25:33.006 [2024-11-15 11:05:41.488822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:33.006 [2024-11-15 11:05:41.525280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
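The abort storm above is the expected signature of a dropped path: tearing down the TCP qpair deletes its submission queue, so every in-flight and queued command completes with ABORTED - SQ DELETION before bdev_nvme fails over to the next registered trid and resets the controller. A minimal sketch of how a failover like the one logged here could be forced by hand, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4421 and 10.0.0.2:4422; nvmf_subsystem_remove_listener is the standard SPDK RPC for dropping a listener, but it does not itself appear in this excerpt, so treat the exact trigger as an assumption:

    # Sketch only: drop the active path so bdev_nvme must fail over to the next trid.
    # Assumes rpc.py is SPDK's scripts/rpc.py and the target also listens on port 4422.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # The initiator log should then show, as above:
    #   bdev_nvme_failover_trid: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
    #   bdev_nvme_reset_ctrlr_complete: Resetting controller successful.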
00:25:33.006 11869.60 IOPS, 46.37 MiB/s [2024-11-15T10:05:52.533Z] 12060.00 IOPS, 47.11 MiB/s [2024-11-15T10:05:52.533Z] 12175.00 IOPS, 47.56 MiB/s [2024-11-15T10:05:52.533Z] 12313.75 IOPS, 48.10 MiB/s
00:25:33.006 [2024-11-15 11:05:45.851040 - 11:05:45.852552] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 33 in-flight WRITE commands (sqid:1 nsid:1 lba:128424-128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 94 in-flight READ commands (sqid:1 nsid:1 lba:127664-128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each printed and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.009 [2024-11-15 11:05:45.852558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1534ad0 is same with the state(6) to be set
00:25:33.009 [2024-11-15 11:05:45.852568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.010 [2024-11-15 11:05:45.852573 - 11:05:45.852582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: queued READ sqid:1 cid:0 nsid:1 lba:128416 len:8 PRP1 0x0 PRP2 0x0 completed manually, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.010 [2024-11-15 11:05:45.852616] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:33.010 [2024-11-15 11:05:45.852633 - 11:05:45.852673] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 4 outstanding ASYNC EVENT REQUESTs (0c) qid:0 cid:0-3 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.010 [2024-11-15 11:05:45.852645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.010 [2024-11-15 11:05:45.852650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.010 [2024-11-15 11:05:45.852656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.010 [2024-11-15 11:05:45.852661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.010 [2024-11-15 11:05:45.852666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.010 [2024-11-15 11:05:45.852673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.010 [2024-11-15 11:05:45.852678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:33.010 [2024-11-15 11:05:45.855342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:33.010 [2024-11-15 11:05:45.855362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511d70 (9): Bad file descriptor 00:25:33.010 [2024-11-15 11:05:45.884783] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:33.010 12323.44 IOPS, 48.14 MiB/s [2024-11-15T10:05:52.537Z] 12389.70 IOPS, 48.40 MiB/s [2024-11-15T10:05:52.537Z] 12432.64 IOPS, 48.56 MiB/s [2024-11-15T10:05:52.537Z] 12465.83 IOPS, 48.69 MiB/s [2024-11-15T10:05:52.537Z] 12498.54 IOPS, 48.82 MiB/s [2024-11-15T10:05:52.537Z] 12521.36 IOPS, 48.91 MiB/s [2024-11-15T10:05:52.537Z] 12546.47 IOPS, 49.01 MiB/s 00:25:33.010 Latency(us) 00:25:33.010 [2024-11-15T10:05:52.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.010 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.010 Verification LBA range: start 0x0 length 0x4000 00:25:33.010 NVMe0n1 : 15.04 12510.42 48.87 354.50 0.00 9902.37 532.48 42816.85 00:25:33.010 [2024-11-15T10:05:52.537Z] =================================================================================================================== 00:25:33.010 [2024-11-15T10:05:52.537Z] Total : 12510.42 48.87 354.50 0.00 9902.37 532.48 42816.85 00:25:33.010 Received shutdown signal, test time was about 15.000000 seconds 00:25:33.010 00:25:33.010 Latency(us) 00:25:33.010 [2024-11-15T10:05:52.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.010 [2024-11-15T10:05:52.537Z] =================================================================================================================== 00:25:33.010 [2024-11-15T10:05:52.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
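The wall of ABORTED - SQ DELETION completions above is the expected signature of this phase of the failover test: when bdev_nvme tears down the TCP qpair to fail over from 10.0.0.2:4422 back to 10.0.0.2:4420, every queued READ on that submission queue is completed manually with SQ DELETION status, and I/O resumes once the controller reset succeeds. The script then asserts that exactly three "Resetting controller successful" messages were logged, one per forced path change. A minimal sketch of that assertion, assuming the bdevperf output was captured to try.txt as in this run ($rootdir stands in for the SPDK checkout root; the same grep/count/comparison is visible in the trace):

    # count the reconnects recorded in the captured bdevperf log
    count=$(grep -c 'Resetting controller successful' "$rootdir/test/nvmf/host/try.txt")
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi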
host/failover.sh@73 -- # bdevperf_pid=507688 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 507688 /var/tmp/bdevperf.sock 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 507688 ']' 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.010 11:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.580 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.580 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:33.581 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:33.841 [2024-11-15 11:05:53.151305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:33.841 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:33.841 [2024-11-15 11:05:53.327713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:33.841 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.410 NVMe0n1 00:25:34.410 11:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.670 00:25:34.670 11:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.930 00:25:34.930 11:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.930 11:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:35.190 11:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.190 11:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:38.491 11:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.491 11:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:38.491 11:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=508723 00:25:38.491 11:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.491 11:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 508723 00:25:39.873 { 00:25:39.873 "results": [ 00:25:39.873 { 00:25:39.873 "job": "NVMe0n1", 00:25:39.873 "core_mask": "0x1", 00:25:39.873 "workload": "verify", 00:25:39.873 "status": "finished", 00:25:39.873 "verify_range": { 00:25:39.873 "start": 0, 00:25:39.873 "length": 16384 00:25:39.873 }, 00:25:39.873 "queue_depth": 128, 00:25:39.873 "io_size": 4096, 00:25:39.873 "runtime": 1.0053, 00:25:39.873 "iops": 12816.07480354123, 00:25:39.873 "mibps": 50.06279220133293, 00:25:39.873 "io_failed": 0, 00:25:39.873 "io_timeout": 0, 00:25:39.873 "avg_latency_us": 9942.529748525303, 00:25:39.873 "min_latency_us": 1140.0533333333333, 00:25:39.873 "max_latency_us": 15073.28 00:25:39.873 } 00:25:39.873 ], 00:25:39.873 "core_count": 1 00:25:39.873 } 00:25:39.873 11:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.873 [2024-11-15 11:05:52.210360] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
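For this second pass the test drives everything over bdevperf's own RPC socket: bdevperf is started with -z (wait for RPC) and -r /var/tmp/bdevperf.sock, extra listeners are opened on ports 4421 and 4422, and NVMe0 is attached once per target port with -x failover so the additional trids become standby paths. Detaching the 4420 path then forces a failover, and perform_tests produces the JSON result block above. A condensed sketch of the sequence, using the same identifiers as the trace:

    sock=/var/tmp/bdevperf.sock
    rpc=scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do                    # first attach creates bdev NVMe0n1
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1         # drop the active path
    examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests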
00:25:39.873 [2024-11-15 11:05:52.210420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507688 ] 00:25:39.873 [2024-11-15 11:05:52.295988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.873 [2024-11-15 11:05:52.324574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.873 [2024-11-15 11:05:54.662920] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:39.873 [2024-11-15 11:05:54.662956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.873 [2024-11-15 11:05:54.662965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.873 [2024-11-15 11:05:54.662972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.873 [2024-11-15 11:05:54.662977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.873 [2024-11-15 11:05:54.662983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.873 [2024-11-15 11:05:54.662988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.873 [2024-11-15 11:05:54.662993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.873 [2024-11-15 11:05:54.662998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.873 [2024-11-15 11:05:54.663004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:39.873 [2024-11-15 11:05:54.663022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:39.873 [2024-11-15 11:05:54.663033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a42d70 (9): Bad file descriptor 00:25:39.873 [2024-11-15 11:05:54.674198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:39.873 Running I/O for 1 seconds... 
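The verify-run summary that follows is internally consistent with the JSON block above: throughput is simply IOPS times the 4096-byte I/O size, and the MiB/s column falls out by dividing by 2^20:

    12816.07 IOPS * 4096 B = 52,494,623 B/s
    52,494,623 B/s / 1,048,576 = 50.06 MiB/s    # matches the "mibps" field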
00:25:39.873 12740.00 IOPS, 49.77 MiB/s 00:25:39.873 Latency(us) 00:25:39.873 [2024-11-15T10:05:59.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.873 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.873 Verification LBA range: start 0x0 length 0x4000 00:25:39.873 NVMe0n1 : 1.01 12816.07 50.06 0.00 0.00 9942.53 1140.05 15073.28 00:25:39.874 [2024-11-15T10:05:59.401Z] =================================================================================================================== 00:25:39.874 [2024-11-15T10:05:59.401Z] Total : 12816.07 50.06 0.00 0.00 9942.53 1140.05 15073.28 00:25:39.874 11:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.874 11:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:39.874 11:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.874 11:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.874 11:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:40.134 11:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.393 11:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 507688 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 507688 ']' 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 507688 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 507688 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 507688' 00:25:43.690 killing process with pid 507688 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 507688 00:25:43.690 11:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 507688 00:25:43.691 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # 
sync 00:25:43.691 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.950 rmmod nvme_tcp 00:25:43.950 rmmod nvme_fabrics 00:25:43.950 rmmod nvme_keyring 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 503969 ']' 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 503969 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 503969 ']' 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 503969 00:25:43.950 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 503969 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 503969' 00:25:43.951 killing process with pid 503969 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 503969 00:25:43.951 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 503969 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.211 11:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.121 11:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.121 00:25:46.121 real 0m40.290s 00:25:46.121 user 2m3.695s 00:25:46.121 sys 0m8.805s 00:25:46.121 11:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:46.121 11:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:46.121 ************************************ 00:25:46.121 END TEST nvmf_failover 00:25:46.121 ************************************ 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.382 ************************************ 00:25:46.382 START TEST nvmf_host_discovery 00:25:46.382 ************************************ 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.382 * Looking for test storage... 
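nvmf_failover finishes in about 40 s of wall time while consuming over two minutes of CPU ("real 0m40.290s" vs "user 2m3.695s"); this is normal for SPDK runs, since the reactors busy-poll even when idle, so user time scales with the cores held rather than the I/O done. The harness then moves straight to the next suite through the same run_test wrapper that prints the START/END banners and the timing, roughly:

    # sketch of the wrapper call visible in the trace
    run_test nvmf_host_discovery \
        "$rootdir/test/nvmf/host/discovery.sh" --transport=tcp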
00:25:46.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.382 --rc genhtml_branch_coverage=1 00:25:46.382 --rc genhtml_function_coverage=1 00:25:46.382 --rc genhtml_legend=1 00:25:46.382 --rc geninfo_all_blocks=1 00:25:46.382 --rc geninfo_unexecuted_blocks=1 00:25:46.382 00:25:46.382 ' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.382 --rc genhtml_branch_coverage=1 00:25:46.382 --rc genhtml_function_coverage=1 00:25:46.382 --rc genhtml_legend=1 00:25:46.382 --rc geninfo_all_blocks=1 00:25:46.382 --rc geninfo_unexecuted_blocks=1 00:25:46.382 00:25:46.382 ' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.382 --rc genhtml_branch_coverage=1 00:25:46.382 --rc genhtml_function_coverage=1 00:25:46.382 --rc genhtml_legend=1 00:25:46.382 --rc geninfo_all_blocks=1 00:25:46.382 --rc geninfo_unexecuted_blocks=1 00:25:46.382 00:25:46.382 ' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:46.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.382 --rc genhtml_branch_coverage=1 00:25:46.382 --rc genhtml_function_coverage=1 00:25:46.382 --rc genhtml_legend=1 00:25:46.382 --rc geninfo_all_blocks=1 00:25:46.382 --rc geninfo_unexecuted_blocks=1 00:25:46.382 00:25:46.382 ' 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.382 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:46.643 11:06:05 
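The cmp_versions trace above is how the harness picks an lcov flag set: both version strings are split on '.', '-' and ':' and compared field by field, so 1.15 sorts before 2 and the pre-2.0 option block (--rc lcov_branch_coverage=1 ...) is exported. A compact re-implementation of the same idea (a sketch, not the literal scripts/common.sh code):

    lt() {                                  # true when version $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do     # missing trailing fields count as 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "use legacy lcov options"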
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
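The "line 33: [: : integer expression expected" message from nvmf/common.sh is noise rather than a failure: build_nvmf_app_args tests a config variable that this job leaves unset, so the expansion becomes '[' '' -eq 1 ']', the numeric test errors out, and the script simply takes the false branch and continues (the trace resumes at @37). The usual defensive pattern avoids the message by defaulting the value first, e.g.:

    # sketch with a hypothetical flag name; the real common.sh checks its own variable
    FLAG=${FLAG:-0}              # empty/unset collapses to 0
    if [ "$FLAG" -eq 1 ]; then
        echo "append the flag-gated app args here"
    fi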
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.643 11:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:54.779 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:54.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:54.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.780 11:06:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:54.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:54.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.780 
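Device discovery above reduces to a sysfs walk: both Intel E810 ports (device ID 0x159b, driver ice) at 0000:4b:00.0 and 0000:4b:00.1 match the e810 allow-list, and each PCI function is mapped to its renamed netdev, cvl_0_0 and cvl_0_1. The mapping step, paraphrased from the common.sh trace:

    # for each matched PCI function, the netdev name is read from sysfs
    pci=0000:4b:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )          # strip the directory prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"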
11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:25:54.780 00:25:54.780 --- 10.0.0.2 ping statistics --- 00:25:54.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.780 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:25:54.780 00:25:54.780 --- 10.0.0.1 ping statistics --- 00:25:54.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.780 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.780 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=514633 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 514633 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 514633 ']' 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.781 11:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.781 [2024-11-15 11:06:13.638313] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
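The two-port setup gives the test real on-the-wire TCP without a second machine: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and the pings above prove connectivity in both directions before nvmf_tgt is started inside the namespace on core mask 0x2. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2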
00:25:54.781 [2024-11-15 11:06:13.638378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.781 [2024-11-15 11:06:13.737046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.781 [2024-11-15 11:06:13.787519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.781 [2024-11-15 11:06:13.787575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.781 [2024-11-15 11:06:13.787584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.781 [2024-11-15 11:06:13.787591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.781 [2024-11-15 11:06:13.787597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.781 [2024-11-15 11:06:13.788374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 [2024-11-15 11:06:14.502209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 [2024-11-15 11:06:14.514502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 null0 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 null1 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=514789 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 514789 /tmp/host.sock 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 514789 ']' 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:55.071 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:55.071 11:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.367 [2024-11-15 11:06:14.620898] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
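The discovery test then builds its fixture in two halves. On the target (inside the namespace): a TCP transport, a listener for the well-known discovery subsystem on port 8009, and two null bdevs to export later. On the host side: a second SPDK app is started with -m 0x1 and its own RPC socket, /tmp/host.sock, so the same rpc_cmd helper can address either process. The target half, condensed from the trace:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # opts from NVMF_TRANSPORT_OPTS
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                     # discovery service endpoint
    rpc_cmd bdev_null_create null0 1000 512            # name, size in MB, block size
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine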
00:25:55.367 [2024-11-15 11:06:14.620963] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514789 ] 00:25:55.367 [2024-11-15 11:06:14.713220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.368 [2024-11-15 11:06:14.766793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.968 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.969 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
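bdev_nvme_start_discovery is the piece under test: pointed at the discovery endpoint (10.0.0.2:8009) with host NQN nqn.2021-12.io.spdk:test, it should attach a controller named after the -b prefix for every subsystem the target later exposes. The surrounding checks poll both apps with small jq pipelines, a sketch of the helpers traced above:

    # observe discovery state on the host app
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    [[ $(get_subsystem_names) == "" ]]   # nothing attached before the subsystem exists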
-- # sort 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.229 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.230 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 [2024-11-15 11:06:15.809862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:56.491 11:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:57.062 [2024-11-15 11:06:16.521605] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:57.062 [2024-11-15 11:06:16.521637] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:57.062 [2024-11-15 11:06:16.521652] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.322 
[2024-11-15 11:06:16.648050] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:57.322 [2024-11-15 11:06:16.831415] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:57.323 [2024-11-15 11:06:16.832622] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1def7a0:1 started. 00:25:57.323 [2024-11-15 11:06:16.834587] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:57.323 [2024-11-15 11:06:16.834617] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:57.583 [2024-11-15 11:06:16.881405] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1def7a0 was disconnected and freed. delete nvme_qpair. 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.583 11:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:57.583 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.844 [2024-11-15 11:06:17.206008] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1dbe0a0:1 started. 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.844 [2024-11-15 11:06:17.210966] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1dbe0a0 was disconnected and freed. delete nvme_qpair. 
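
Everything traced to this point reduces to a short RPC sequence. The rpc_cmd seen in the xtrace is the harness wrapper around SPDK's scripts/rpc.py, and get_subsystem_names / get_bdev_list are one-line RPC-plus-jq pipelines. A hedged reconstruction of the setup so far, with sockets, ports, and NQNs taken from the log (the null0/null1 bdevs are assumed to have been created earlier in the script):

    # --- host side (SPDK app on /tmp/host.sock) ---
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # helpers as reconstructed from the trace
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # --- target side (default RPC socket) ---
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

Note the ordering: both get_subsystem_names and get_bdev_list stay empty until nvmf_subsystem_add_host allows the host NQN in; only then does the discovery poller attach nvme0 and expose nvme0n1, and the second add_ns arrives as an AER on the live connection and surfaces as nvme0n2.
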
00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 [2024-11-15 11:06:17.309663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.844 [2024-11-15 11:06:17.310196] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:57.844 [2024-11-15 11:06:17.310218] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.844 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:58.104 11:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.104 [2024-11-15 11:06:17.438615] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.104 11:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:58.104 [2024-11-15 11:06:17.497500] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:58.104 [2024-11-15 11:06:17.497542] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.104 [2024-11-15 11:06:17.497556] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.104 [2024-11-15 11:06:17.497566] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.043 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.306 [2024-11-15 11:06:18.581472] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:59.306 [2024-11-15 11:06:18.581499] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.306 [2024-11-15 11:06:18.585757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.306 [2024-11-15 11:06:18.585777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.306 [2024-11-15 11:06:18.585786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.306 [2024-11-15 11:06:18.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.306 [2024-11-15 11:06:18.585802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.306 [2024-11-15 11:06:18.585809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.306 [2024-11-15 11:06:18.585817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.306 [2024-11-15 11:06:18.585825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.306 [2024-11-15 11:06:18.585832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.306 [2024-11-15 11:06:18.595769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.306 [2024-11-15 11:06:18.605807] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.306 [2024-11-15 11:06:18.605821] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.306 [2024-11-15 11:06:18.605826] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.605831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.306 [2024-11-15 11:06:18.605848] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.606173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.306 [2024-11-15 11:06:18.606188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.306 [2024-11-15 11:06:18.606202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.306 [2024-11-15 11:06:18.606214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.306 [2024-11-15 11:06:18.606225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.306 [2024-11-15 11:06:18.606232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.306 [2024-11-15 11:06:18.606240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.306 [2024-11-15 11:06:18.606247] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.306 [2024-11-15 11:06:18.606252] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.306 [2024-11-15 11:06:18.606257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
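
The local cond / local max=10 / eval / sleep 1 fragments that repeat through this trace all belong to one polling helper in autotest_common.sh (lines @916-@922 in the xtrace). A sketch consistent with what the trace shows, assuming nothing beyond it:

    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1      # retry roughly once per second
        done
        return 1         # assumed failure path after ten attempts
    }
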
00:25:59.306 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.306 [2024-11-15 11:06:18.615876] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.306 [2024-11-15 11:06:18.615884] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.306 [2024-11-15 11:06:18.615887] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.615891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.306 [2024-11-15 11:06:18.615900] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.616186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.306 [2024-11-15 11:06:18.616194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.306 [2024-11-15 11:06:18.616199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.306 [2024-11-15 11:06:18.616207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.306 [2024-11-15 11:06:18.616214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.306 [2024-11-15 11:06:18.616219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.306 [2024-11-15 11:06:18.616224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.306 [2024-11-15 11:06:18.616228] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.306 [2024-11-15 11:06:18.616231] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.306 [2024-11-15 11:06:18.616234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.306 [2024-11-15 11:06:18.625929] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.306 [2024-11-15 11:06:18.625939] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.306 [2024-11-15 11:06:18.625942] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.625945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.306 [2024-11-15 11:06:18.625955] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
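
The notification_count / notify_id pair being tracked comes from the host app's notify_get_notifications RPC, which returns every recorded event with an ID above the -i argument. The traced values (count taken as the jq length of the reply, notify_id advanced by that count) suggest a helper along these lines:

    notify_id=0
    get_notification_count() {
        # events recorded since the last ID we consumed
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
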
00:25:59.306 [2024-11-15 11:06:18.626158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.306 [2024-11-15 11:06:18.626167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.306 [2024-11-15 11:06:18.626172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.306 [2024-11-15 11:06:18.626180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.306 [2024-11-15 11:06:18.626187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.306 [2024-11-15 11:06:18.626192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.306 [2024-11-15 11:06:18.626197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.306 [2024-11-15 11:06:18.626201] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.306 [2024-11-15 11:06:18.626205] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.306 [2024-11-15 11:06:18.626208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.306 [2024-11-15 11:06:18.635983] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.306 [2024-11-15 11:06:18.635993] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.306 [2024-11-15 11:06:18.635996] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.636000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.306 [2024-11-15 11:06:18.636010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.306 [2024-11-15 11:06:18.636293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.306 [2024-11-15 11:06:18.636303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.306 [2024-11-15 11:06:18.636309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.306 [2024-11-15 11:06:18.636319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.307 [2024-11-15 11:06:18.636328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.307 [2024-11-15 11:06:18.636333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.307 [2024-11-15 11:06:18.636340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.307 [2024-11-15 11:06:18.636346] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
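
The connect() failed, errno = 111 loop above is the injected failure, not a bug: the test removed the 4420 listener on the target (host/discovery.sh@127 in the trace), so every reconnect to that port is refused and bdev_nvme cycles through delete-qpairs, disconnect, reconnect, fail. The target-side step, exactly as traced:

    # drop the original path; reconnects to 4420 now die with ECONNREFUSED (111)
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
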
00:25:59.307 [2024-11-15 11:06:18.636351] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.307 [2024-11-15 11:06:18.636356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:59.307 [2024-11-15 11:06:18.646039] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.307 [2024-11-15 11:06:18.646048] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.307 [2024-11-15 11:06:18.646052] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.307 [2024-11-15 11:06:18.646055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.307 [2024-11-15 11:06:18.646064] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.307 [2024-11-15 11:06:18.646344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.307 [2024-11-15 11:06:18.646353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.307 [2024-11-15 11:06:18.646360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.307 [2024-11-15 11:06:18.646372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.307 [2024-11-15 11:06:18.646380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.307 [2024-11-15 11:06:18.646384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.307 [2024-11-15 11:06:18.646389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.307 [2024-11-15 11:06:18.646393] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
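
The reconnect storm never takes the namespaces away: the 4421 qpair keeps the controller alive, which is why the host/discovery.sh@130 check interleaved above still expects both bdevs while 4420 flaps:

    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
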
00:25:59.307 [2024-11-15 11:06:18.646397] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.307 [2024-11-15 11:06:18.646400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.307 [2024-11-15 11:06:18.656094] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.307 [2024-11-15 11:06:18.656105] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:59.307 [2024-11-15 11:06:18.656108] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.307 [2024-11-15 11:06:18.656112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.307 [2024-11-15 11:06:18.656122] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.307 [2024-11-15 11:06:18.656396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.307 [2024-11-15 11:06:18.656404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.307 [2024-11-15 11:06:18.656410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.307 [2024-11-15 11:06:18.656421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.307 [2024-11-15 11:06:18.656429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.307 [2024-11-15 11:06:18.656433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.307 [2024-11-15 11:06:18.656438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.307 [2024-11-15 11:06:18.656443] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.307 [2024-11-15 11:06:18.656446] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.307 [2024-11-15 11:06:18.656449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.307 [2024-11-15 11:06:18.666151] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:59.307 [2024-11-15 11:06:18.666158] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
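
The per-path checks use one more helper, which lists the transport service ID (port) of every path behind a named controller; reconstructed from the traced rpc_cmd / jq / sort -n / xargs pipeline:

    get_subsystem_paths() {
        # prints e.g. "4420 4421" with both listeners up, "4421" after the removal
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
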
00:25:59.307 [2024-11-15 11:06:18.666162] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.307 [2024-11-15 11:06:18.666165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.307 [2024-11-15 11:06:18.666174] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:59.307 [2024-11-15 11:06:18.666447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.307 [2024-11-15 11:06:18.666455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbfe10 with addr=10.0.0.2, port=4420 00:25:59.307 [2024-11-15 11:06:18.666460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbfe10 is same with the state(6) to be set 00:25:59.307 [2024-11-15 11:06:18.666467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbfe10 (9): Bad file descriptor 00:25:59.307 [2024-11-15 11:06:18.666474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.307 [2024-11-15 11:06:18.666478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.307 [2024-11-15 11:06:18.666483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.307 [2024-11-15 11:06:18.666487] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.307 [2024-11-15 11:06:18.666491] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:59.307 [2024-11-15 11:06:18.666494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
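
The cycle ends when the discovery poller re-reads the log page (fetched on the earlier AER) and finds the 4420 path gone, as the "not found" line below records; the dead path is then deleted rather than reset again. The script's corresponding wait (host/discovery.sh@131):

    # converges once discovery prunes the dead 4420 path
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
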
00:25:59.307 [2024-11-15 11:06:18.668844] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:59.307 [2024-11-15 11:06:18.668856] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.307 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.308 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.568 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.569 11:06:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.510 [2024-11-15 11:06:20.000491] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.510 [2024-11-15 11:06:20.000506] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.510 [2024-11-15 11:06:20.000515] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.770 [2024-11-15 11:06:20.127896] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.030 [2024-11-15 11:06:20.394247] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:01.030 [2024-11-15 11:06:20.394919] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f279a0:1 started. 
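The discovery restart above passes -w (wait_for_attach) to bdev_nvme_start_discovery, which keeps the RPC from completing until the discovery log page has been read and every reported subsystem is attached; that is why the controller to 10.0.0.2:4421 already exists when the call returns. What follows is a negative check: repeating the RPC while discovery service "nvme" exists must fail with -17 (File exists), and the harness's NOT wrapper turns that expected failure into a pass. The duplicate call, as issued through rpc_cmd:

$ rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w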
00:26:01.030 [2024-11-15 11:06:20.396374] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.030 [2024-11-15 11:06:20.396400] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.030 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.030 request: 00:26:01.030 { 00:26:01.030 "name": "nvme", 00:26:01.030 "trtype": "tcp", 00:26:01.030 "traddr": "10.0.0.2", 00:26:01.030 "adrfam": "ipv4", 00:26:01.030 "trsvcid": "8009", 00:26:01.030 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.030 "wait_for_attach": true, 00:26:01.030 "method": "bdev_nvme_start_discovery", 00:26:01.030 "req_id": 1 00:26:01.030 } 00:26:01.030 Got JSON-RPC error response 00:26:01.030 response: 00:26:01.030 { 00:26:01.031 "code": -17, 00:26:01.031 "message": "File exists" 00:26:01.031 } 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.031 11:06:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.031 [2024-11-15 11:06:20.439257] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f279a0 was disconnected and freed. delete nvme_qpair. 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.031 request: 00:26:01.031 { 00:26:01.031 "name": "nvme_second", 00:26:01.031 "trtype": "tcp", 00:26:01.031 "traddr": "10.0.0.2", 00:26:01.031 "adrfam": "ipv4", 00:26:01.031 "trsvcid": "8009", 00:26:01.031 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.031 "wait_for_attach": true, 00:26:01.031 "method": 
"bdev_nvme_start_discovery", 00:26:01.031 "req_id": 1 00:26:01.031 } 00:26:01.031 Got JSON-RPC error response 00:26:01.031 response: 00:26:01.031 { 00:26:01.031 "code": -17, 00:26:01.031 "message": "File exists" 00:26:01.031 } 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.031 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:01.292 11:06:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.292 11:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.233 [2024-11-15 11:06:21.660000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.233 [2024-11-15 11:06:21.660024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1220 with addr=10.0.0.2, port=8010 00:26:02.233 [2024-11-15 11:06:21.660038] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.233 [2024-11-15 11:06:21.660044] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.233 [2024-11-15 11:06:21.660049] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.172 [2024-11-15 11:06:22.662333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.172 [2024-11-15 11:06:22.662353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1220 with addr=10.0.0.2, port=8010 00:26:03.172 [2024-11-15 11:06:22.662361] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.172 [2024-11-15 11:06:22.662366] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.172 [2024-11-15 11:06:22.662370] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.554 [2024-11-15 11:06:23.664342] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.554 request: 00:26:04.554 { 00:26:04.554 "name": "nvme_second", 00:26:04.554 "trtype": "tcp", 00:26:04.554 "traddr": "10.0.0.2", 00:26:04.554 "adrfam": "ipv4", 00:26:04.554 "trsvcid": "8010", 00:26:04.554 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.554 "wait_for_attach": false, 00:26:04.554 "attach_timeout_ms": 3000, 00:26:04.554 "method": "bdev_nvme_start_discovery", 00:26:04.554 "req_id": 1 00:26:04.554 } 00:26:04.554 Got JSON-RPC error response 00:26:04.554 response: 00:26:04.554 { 00:26:04.554 "code": -110, 00:26:04.554 "message": "Connection timed out" 00:26:04.554 } 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:04.554 11:06:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 514789 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.554 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.555 rmmod nvme_tcp 00:26:04.555 rmmod nvme_fabrics 00:26:04.555 rmmod nvme_keyring 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 514633 ']' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 514633 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 514633 ']' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 514633 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 514633 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 514633' 00:26:04.555 killing process with pid 514633 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 514633 
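That completes both negative discovery cases: a duplicate discovery name fails fast with -17 (File exists), while pointing nvme_second at the idle port 8010 with -T 3000 (attach_timeout_ms) retries refused connects until the 3 s budget lapses and the RPC returns -110 (Connection timed out), again consumed by NOT. The timed-out invocation was:

$ rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

With the assertions green, nvmftestfini unloads nvme-tcp/fabrics/keyring and killprocess terminates the target (pid 514633), then waits on it so its sockets and other resources are released before the next test starts.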
00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 514633 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.555 11:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.097 00:26:07.097 real 0m20.334s 00:26:07.097 user 0m23.383s 00:26:07.097 sys 0m7.281s 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.097 ************************************ 00:26:07.097 END TEST nvmf_host_discovery 00:26:07.097 ************************************ 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.097 ************************************ 00:26:07.097 START TEST nvmf_host_multipath_status 00:26:07.097 ************************************ 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.097 * Looking for test storage... 
00:26:07.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:07.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.097 --rc genhtml_branch_coverage=1 00:26:07.097 --rc genhtml_function_coverage=1 00:26:07.097 --rc genhtml_legend=1 00:26:07.097 --rc geninfo_all_blocks=1 00:26:07.097 --rc geninfo_unexecuted_blocks=1 00:26:07.097 00:26:07.097 ' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:07.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.097 --rc genhtml_branch_coverage=1 00:26:07.097 --rc genhtml_function_coverage=1 00:26:07.097 --rc genhtml_legend=1 00:26:07.097 --rc geninfo_all_blocks=1 00:26:07.097 --rc geninfo_unexecuted_blocks=1 00:26:07.097 00:26:07.097 ' 00:26:07.097 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:07.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.097 --rc genhtml_branch_coverage=1 00:26:07.097 --rc genhtml_function_coverage=1 00:26:07.097 --rc genhtml_legend=1 00:26:07.097 --rc geninfo_all_blocks=1 00:26:07.097 --rc geninfo_unexecuted_blocks=1 00:26:07.097 00:26:07.097 ' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:07.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.098 --rc genhtml_branch_coverage=1 00:26:07.098 --rc genhtml_function_coverage=1 00:26:07.098 --rc genhtml_legend=1 00:26:07.098 --rc geninfo_all_blocks=1 00:26:07.098 --rc geninfo_unexecuted_blocks=1 00:26:07.098 00:26:07.098 ' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
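Before multipath_status.sh pulls in the nvmf helpers, the harness reads lcov --version, takes the last field with awk, and feeds it to cmp_versions with '<' against 2; 1.15 compares below 2, so the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage options are exported. An equivalent stand-alone gate using sort -V (a sketch, not the script's own code; ver_lt is an illustrative name):

$ ver_lt() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
$ ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: keep legacy --rc lcov_* flags"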
00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.098 11:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.238 11:06:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:15.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
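The scan above is gather_supported_nvmf_pci_devs bucketing PCI IDs: vendor 0x8086, device 0x159b is an Intel E810 port handled by the ice driver, matching SPDK_TEST_NVMF_NICS=e810 from the job config, and the tcp transport skips every rdma-only branch. A hedged sketch of the per-port inspection being logged (the sysfs paths are real; the loop itself is illustrative, not the harness's code):

$ for pci in 0000:4b:00.0 0000:4b:00.1; do
>   drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
>   net=$(ls "/sys/bus/pci/devices/$pci/net")   # netdev bound to this port
>   echo "$pci driver=$drv net=$net"
> done

Both ports report ice-bound netdevs, which the harness has renamed cvl_0_0 and cvl_0_1; those two entries become net_devs and are split into the target/initiator interface pair just below.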
00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:15.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:15.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:15.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.238 11:06:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:15.238 00:26:15.238 --- 10.0.0.2 ping statistics --- 00:26:15.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.238 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:15.238 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:26:15.239 00:26:15.239 --- 10.0.0.1 ping statistics --- 00:26:15.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.239 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=520890 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 520890 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 520890 ']' 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.239 11:06:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.239 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.239 [2024-11-15 11:06:33.919498] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:26:15.239 [2024-11-15 11:06:33.919574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.239 [2024-11-15 11:06:34.018002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:15.239 [2024-11-15 11:06:34.070157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.239 [2024-11-15 11:06:34.070208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.239 [2024-11-15 11:06:34.070216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.239 [2024-11-15 11:06:34.070224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.239 [2024-11-15 11:06:34.070230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.239 [2024-11-15 11:06:34.071961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.239 [2024-11-15 11:06:34.071965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.239 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.239 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:15.239 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.239 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:15.239 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.500 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.500 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=520890 00:26:15.500 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:15.500 [2024-11-15 11:06:34.940822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.500 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:15.761 Malloc0 00:26:15.761 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:16.021 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.283 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.283 [2024-11-15 11:06:35.751560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.283 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:16.545 [2024-11-15 11:06:35.948020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=521380 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 521380 /var/tmp/bdevperf.sock 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 521380 ']' 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
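The trace above amounts to a small, repeatable bring-up: the target (running inside the cvl_0_0_ns_spdk namespace created earlier) gets a TCP transport, a Malloc bdev, and a subsystem with two TCP listeners on ports 4420 and 4421, and then bdevperf is launched with -x multipath on its own RPC socket. A condensed sketch of those target-side RPC calls, reconstructed from the multipath_status.sh@36-42 lines above; the $rpc shorthand for the scripts/rpc.py path is the only assumption:

    # shorthand for the rpc.py path seen in the log (assumption, adjust to your tree)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421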
00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.545 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:17.489 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.489 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:17.489 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:17.749 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:18.010 Nvme0n1 00:26:18.010 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:18.581 Nvme0n1 00:26:18.581 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:18.581 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:20.492 11:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:20.492 11:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:20.753 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.013 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:21.953 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:21.953 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.953 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.953 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.213 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.473 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.473 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.473 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.473 11:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.733 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.993 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.993 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:22.993 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
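Every port_status invocation in the trace is the same two-step probe: dump the initiator's I/O paths over the bdevperf RPC socket, then pull a single boolean out with jq and compare it against the expected value. A minimal sketch of that helper, reconstructed from the repeated multipath_status.sh@64 lines (the $rpc shorthand carries over from the sketch above and is an assumption):

    # port_status <trsvcid> <attr> <expected>: nonzero exit on mismatch
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }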
00:26:23.254 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:23.514 11:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:24.455 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:24.455 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:24.455 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.455 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.716 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.716 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.716 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.716 11:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.716 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.716 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.716 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.716 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.977 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.977 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.977 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.977 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
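The outer loop is equally mechanical: set_ANA_state pushes one ANA state per listener, the test sleeps a second so the initiator can re-read the ANA log page, and check_status asserts six booleans, current, connected, and accessible for ports 4420 and 4421 in that order. A sketch matching the argument order visible in the trace (check_status true false true true true true resolves to 4420 current true, 4421 current false, and so on), built on the port_status helper sketched earlier:

    # set_ANA_state <state_4420> <state_4421>
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    # check_status <cur_4420> <cur_4421> <conn_4420> <conn_4421> <acc_4420> <acc_4421>
    check_status() {
        port_status 4420 current "$1"    && port_status 4421 current "$2" &&
        port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }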
00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.238 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.498 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.498 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:25.499 11:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.761 11:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:25.761 11:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.148 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.410 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.410 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.410 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.410 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.671 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.671 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.671 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.671 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.671 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.671 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.671 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.671 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.931 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.931 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:27.931 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:28.192 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.192 11:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.577 11:06:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.577 11:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.577 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.577 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.577 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.578 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.838 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.838 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.838 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.838 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.099 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.099 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.099 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.099 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.359 11:06:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:30.359 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:30.620 11:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:30.884 11:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:31.828 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:31.828 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.828 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.828 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.089 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.349 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.349 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.349 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.349 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.610 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.610 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.610 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.610 11:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.610 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.610 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:32.610 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.610 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.869 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.869 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:32.869 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:33.129 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.129 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.512 11:06:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.512 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.512 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.512 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.512 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.512 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.773 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.773 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.773 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.773 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.033 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.033 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:35.033 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.033 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.293 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:35.553 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:35.553 11:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:35.812 11:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:35.812 11:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.192 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.451 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.451 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.451 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.451 11:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.711 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.711 11:06:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.711 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.711 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:37.971 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.233 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.493 11:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:39.435 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:39.435 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.435 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.435 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.696 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.696 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.696 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.696 11:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.696 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.696 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.696 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.696 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.956 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.956 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.956 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.956 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.217 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.479 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.479 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:40.479 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.740 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:41.000 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
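Up to multipath_status.sh@116 only one path at a time reports current==true, consistent with bdev_nvme's default active_passive policy; after bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active, every path in the best available ANA group is used simultaneously, which is why the checks that follow expect current==true on both ports whenever the two listeners share a state (optimized/optimized, then non_optimized/non_optimized), but still only on 4421 for non_optimized/optimized. A sketch of that last scenario, reusing the helpers sketched above:

    # active_active: all paths in the best ANA group stay current at once
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state non_optimized non_optimized
    sleep 1
    check_status true true true true true true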
00:26:41.943 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:41.943 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:41.943 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.943 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.204 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.465 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.465 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.465 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.465 11:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.727 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.988 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.988 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:42.988 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.250 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:43.511 11:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:44.977 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.454 11:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.715 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:44.977 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.977 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:44.977 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.977 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.238 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 521380
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 521380 ']'
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 521380
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 521380
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 521380'
killing process with pid 521380
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 521380
00:26:45.499 11:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 521380
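[Editor's note] What follows is bdevperf's final JSON result for the Nvme0n1 job, flushed as the process is killed, and then the test cat-ing the full bdevperf output from try.txt. The throughput fields are internally consistent: mibps = iops * io_size / 2^20. A quick check (a sketch; the figures are copied from the JSON below):

    # 11903.98 IOPS of 4096-byte I/Os expressed in MiB/s
    awk 'BEGIN { printf "%.8f\n", 11903.977828557565 * 4096 / 1048576 }'
    # -> 46.49991339, matching the reported "mibps"

Note also the runtime of 26.87 s against the 90 s the job was started with ('Running I/O for 90 seconds...' below): "status": "terminated" records that the run was cut short by the deliberate kill at sh@137 above, not by an I/O error ("io_failed": 0).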
"core_mask": "0x4", 00:26:45.499 "workload": "verify", 00:26:45.499 "status": "terminated", 00:26:45.499 "verify_range": { 00:26:45.499 "start": 0, 00:26:45.499 "length": 16384 00:26:45.499 }, 00:26:45.499 "queue_depth": 128, 00:26:45.499 "io_size": 4096, 00:26:45.499 "runtime": 26.87421, 00:26:45.499 "iops": 11903.977828557565, 00:26:45.499 "mibps": 46.49991339280299, 00:26:45.499 "io_failed": 0, 00:26:45.499 "io_timeout": 0, 00:26:45.499 "avg_latency_us": 10734.027510570682, 00:26:45.499 "min_latency_us": 610.9866666666667, 00:26:45.499 "max_latency_us": 3019898.88 00:26:45.499 } 00:26:45.499 ], 00:26:45.499 "core_count": 1 00:26:45.499 } 00:26:45.776 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 521380 00:26:45.776 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.776 [2024-11-15 11:06:36.030236] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:26:45.776 [2024-11-15 11:06:36.030321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521380 ] 00:26:45.776 [2024-11-15 11:06:36.122299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.776 [2024-11-15 11:06:36.173412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.776 Running I/O for 90 seconds... 00:26:45.776 9796.00 IOPS, 38.27 MiB/s [2024-11-15T10:07:05.303Z] 10499.00 IOPS, 41.01 MiB/s [2024-11-15T10:07:05.303Z] 10733.33 IOPS, 41.93 MiB/s [2024-11-15T10:07:05.303Z] 11261.75 IOPS, 43.99 MiB/s [2024-11-15T10:07:05.303Z] 11645.60 IOPS, 45.49 MiB/s [2024-11-15T10:07:05.303Z] 11851.83 IOPS, 46.30 MiB/s [2024-11-15T10:07:05.303Z] 11991.00 IOPS, 46.84 MiB/s [2024-11-15T10:07:05.303Z] 12086.00 IOPS, 47.21 MiB/s [2024-11-15T10:07:05.303Z] 12172.33 IOPS, 47.55 MiB/s [2024-11-15T10:07:05.303Z] 12231.30 IOPS, 47.78 MiB/s [2024-11-15T10:07:05.303Z] 12290.00 IOPS, 48.01 MiB/s [2024-11-15T10:07:05.303Z] [2024-11-15 11:06:49.986861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.776 [2024-11-15 11:06:49.986892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.986923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.986930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.986941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.986946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.986957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.986962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.986973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.986978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.986988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.986994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.776 [2024-11-15 11:06:49.987341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.776 [2024-11-15 11:06:49.987347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.777 [2024-11-15 11:06:49.987543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.777 [2024-11-15 11:06:49.987925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.777 [2024-11-15 11:06:49.987930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.987942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.987951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 11:06:49.988695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.778 [2024-11-15 11:06:49.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.778 [2024-11-15 
11:06:49.988716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.988988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.988994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989158] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 
11:06:49.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.779 [2024-11-15 11:06:49.989408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.779 [2024-11-15 11:06:49.989423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:06:49.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.780 [2024-11-15 11:06:49.989445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:06:49.989452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.780 [2024-11-15 11:06:49.989468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:06:49.989473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.780 [2024-11-15 11:06:49.989488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:06:49.989494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.780 [2024-11-15 11:06:49.989510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:06:49.989515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.780 12259.25 IOPS, 47.89 MiB/s [2024-11-15T10:07:05.307Z] 11316.23 IOPS, 44.20 MiB/s [2024-11-15T10:07:05.307Z] 10507.93 IOPS, 41.05 MiB/s [2024-11-15T10:07:05.307Z] 9860.00 IOPS, 38.52 MiB/s [2024-11-15T10:07:05.307Z] 10046.06 IOPS, 39.24 MiB/s [2024-11-15T10:07:05.307Z] 10220.00 IOPS, 39.92 MiB/s [2024-11-15T10:07:05.307Z] 10564.33 IOPS, 41.27 MiB/s [2024-11-15T10:07:05.307Z] 10892.37 IOPS, 42.55 MiB/s [2024-11-15T10:07:05.307Z] 11104.00 IOPS, 43.38 MiB/s [2024-11-15T10:07:05.307Z] 11184.57 IOPS, 43.69 MiB/s [2024-11-15T10:07:05.307Z] 11252.55 IOPS, 43.96 MiB/s [2024-11-15T10:07:05.307Z] 11446.17 IOPS, 44.71 MiB/s [2024-11-15T10:07:05.307Z] 11682.67 IOPS, 45.64 MiB/s [2024-11-15T10:07:05.307Z] [2024-11-15 11:07:02.752661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.780 [2024-11-15 11:07:02.752698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.780 [2024-11-15 11:07:02.752715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103712 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:26:45.780 [2024-11-15 11:07:02.752721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:45.780 [2024-11-15 11:07:02.752732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.780 [2024-11-15 11:07:02.752743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:45.780 [2024-11-15 11:07:02.752753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.780 [2024-11-15 11:07:02.752759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:45.782 [2024-11-15 11:07:02.755709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.782 [2024-11-15 11:07:02.755714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... the same two-line pattern repeats for every remaining outstanding I/O on qid:1 between 11:07:02.752721 and 11:07:02.762704: nvme_io_qpair_print_command prints each queued WRITE (SGL DATA BLOCK OFFSET) or READ (SGL TRANSPORT DATA BLOCK TRANSPORT) command, and spdk_nvme_print_completion reports its completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02); only cid, lba, and sqhd vary ...]
00:26:45.786 [2024-11-15 11:07:02.762689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.786 [2024-11-15 11:07:02.762694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:45.786 [2024-11-15 11:07:02.762704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:26:45.786 [2024-11-15 11:07:02.762710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.786 [2024-11-15 11:07:02.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.762990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.762995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.786 [2024-11-15 11:07:02.763082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.786 [2024-11-15 11:07:02.763087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.763098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.763103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.763113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.763119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.763129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.763136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.763146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.763151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.771527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.771575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.771580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 
11:07:02.773549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.787 [2024-11-15 11:07:02.773743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.787 [2024-11-15 11:07:02.773758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.787 [2024-11-15 11:07:02.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.773837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.773868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.773878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.774651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.774729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.774744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.774806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.774811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.775236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.775251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.775266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 
11:07:02.775328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.788 [2024-11-15 11:07:02.775344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.788 [2024-11-15 11:07:02.775377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.788 [2024-11-15 11:07:02.775387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105240 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.775970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.775986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.775996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.776001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.776017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.776853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.776877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.776898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.776919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.776940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.776961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.776982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.776998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.777006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:45.789 [2024-11-15 11:07:02.777061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.777193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.777257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.789 [2024-11-15 11:07:02.777298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.789 [2024-11-15 11:07:02.777319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.789 [2024-11-15 11:07:02.777333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 
11:07:02.777463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105240 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.777717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.777751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.777760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.779964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.779980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.779996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.780003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.790 [2024-11-15 11:07:02.780149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.780169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.780190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.780214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.790 [2024-11-15 11:07:02.780235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.790 [2024-11-15 11:07:02.780249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:26:45.791 [2024-11-15 11:07:02.780290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 
11:07:02.780699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.780886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.780900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.780907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.782324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.791 [2024-11-15 11:07:02.782353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.782377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.782402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.782424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.782445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.791 [2024-11-15 11:07:02.782458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.791 [2024-11-15 11:07:02.782466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.782480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.782487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.782501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.782508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:26:45.792 [2024-11-15 11:07:02.783301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 
11:07:02.783715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.792 [2024-11-15 11:07:02.783758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.783834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.783842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.792 [2024-11-15 11:07:02.785203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.792 [2024-11-15 11:07:02.785218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105808 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.785688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.785723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.785730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.786810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.786832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.786853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.786874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.786895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.793 [2024-11-15 11:07:02.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.793 [2024-11-15 11:07:02.787771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.793 [2024-11-15 11:07:02.787785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.794 [2024-11-15 11:07:02.787792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.794 [2024-11-15 11:07:02.787806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.794 [2024-11-15 11:07:02.787813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.794 [2024-11-15 11:07:02.787827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.794 [2024-11-15 11:07:02.787834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.794 [2024-11-15 11:07:02.787847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.794 [2024-11-15 11:07:02.787854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.787877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.787898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.787919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.787940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.787961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.787982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.787995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.788949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.788963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.788970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.789719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.789729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.789740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.794 [2024-11-15 11:07:02.789746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.789756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.794 [2024-11-15 11:07:02.789761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:45.794 [2024-11-15 11:07:02.789771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.789919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.789991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.789998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.790107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.790153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.790169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.790185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.790196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.790201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.791880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.791897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.795 [2024-11-15 11:07:02.791912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.791927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.791943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.795 [2024-11-15 11:07:02.791957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:45.795 [2024-11-15 11:07:02.791968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.791972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.791982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.791988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.791998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.792299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.792402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.792407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.796 [2024-11-15 11:07:02.793162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.796 [2024-11-15 11:07:02.793303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:45.796 [2024-11-15 11:07:02.793313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.793318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.793333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.793979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.793990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.797 [2024-11-15 11:07:02.794980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.794990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.795006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:45.797 [2024-11-15 11:07:02.795021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.797 [2024-11-15 11:07:02.795026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.795908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.795986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.795996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.796493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.796509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.796524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.798 [2024-11-15 11:07:02.796555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.796579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.798 [2024-11-15 11:07:02.796594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:45.798 [2024-11-15 11:07:02.796604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.796625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.796641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.796703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.796750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.796760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.796766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.797783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.799 [2024-11-15 11:07:02.797798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15 11:07:02.797844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:45.799 [2024-11-15 11:07:02.797854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.799 [2024-11-15
11:07:02.797859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.797907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.797923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.797985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.797995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.798000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.798010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106864 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.798015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.798025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.798031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.798041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.798057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.799 [2024-11-15 11:07:02.798062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.799 [2024-11-15 11:07:02.798073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.799 [2024-11-15 11:07:02.798078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.798088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.798095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.798105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.798110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.798121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 
dnr:0 00:26:45.800 [2024-11-15 11:07:02.799739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.799931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.799957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.799962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.800546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 11:07:02.800567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.800 [2024-11-15 
11:07:02.800614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.800 [2024-11-15 11:07:02.800675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.800 [2024-11-15 11:07:02.800680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107232 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.800789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.800863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.800869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:26:45.801 [2024-11-15 11:07:02.801716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.801767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.801778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.801783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.802092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.801 [2024-11-15 11:07:02.802205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.801 [2024-11-15 11:07:02.802216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.801 [2024-11-15 11:07:02.802221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.802237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.802252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.802268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.802283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.802299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.802314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.802330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.802341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.803964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.803991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.803997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.804007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.804012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 
11:07:02.804022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.804028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.804038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.802 [2024-11-15 11:07:02.804043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.804053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.802 [2024-11-15 11:07:02.804059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.802 [2024-11-15 11:07:02.804069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.804074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.804914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.804927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.804948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.804954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.804965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.804970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.804980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.804995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
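For context on the condensed block above: the "(03/02)" printed by spdk_nvme_print_completion is the NVMe (Status Code Type/Status Code) pair. SCT 0x3 is Path Related Status, and SC 0x2 within it is Asymmetric Access Inaccessible, meaning the namespace is temporarily unreachable through this controller's ANA group; dnr:0 (Do Not Retry clear) marks the commands as retryable, which is why the same LBAs keep reappearing under new cids as the host requeues them. A quick way to tally a dump like this, assuming the raw console output (one record per line) has been saved to a file; multipath.log is a hypothetical name:

    # count completion records printed by the host driver
    grep -c 'spdk_nvme_print_completion' multipath.log
    # count the ones that failed with the ANA inaccessible path status
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' multipath.log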
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.803 [2024-11-15 11:07:02.805235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.803 [2024-11-15 11:07:02.805261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.803 [2024-11-15 11:07:02.805266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.803 11830.72 IOPS, 46.21 MiB/s [2024-11-15T10:07:05.330Z] 11870.04 IOPS, 46.37 MiB/s [2024-11-15T10:07:05.330Z] Received shutdown signal, test time was about 26.874818 seconds 00:26:45.803 00:26:45.803 Latency(us) 00:26:45.803 [2024-11-15T10:07:05.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.803 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:45.803 Verification LBA range: start 0x0 length 0x4000 00:26:45.803 Nvme0n1 : 26.87 11903.98 46.50 0.00 0.00 10734.03 610.99 3019898.88 00:26:45.803 [2024-11-15T10:07:05.330Z] =================================================================================================================== 00:26:45.803 [2024-11-15T10:07:05.330Z] Total : 11903.98 46.50 0.00 0.00 10734.03 610.99 3019898.88 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.803 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.803 rmmod nvme_tcp 00:26:45.803 rmmod nvme_fabrics 00:26:46.064 rmmod nvme_keyring 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 520890 ']' 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 520890 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 520890 ']' 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 520890 00:26:46.064 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 520890 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 520890' 00:26:46.065 killing process with pid 520890 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 520890 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 520890 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- 
# iptables-restore 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.065 11:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.612 00:26:48.612 real 0m41.475s 00:26:48.612 user 1m47.424s 00:26:48.612 sys 0m11.599s 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.612 ************************************ 00:26:48.612 END TEST nvmf_host_multipath_status 00:26:48.612 ************************************ 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.612 ************************************ 00:26:48.612 START TEST nvmf_discovery_remove_ifc 00:26:48.612 ************************************ 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.612 * Looking for test storage... 
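A quick sanity check on the summary table above: the MiB/s column is just IOPS times the 4096-byte I/O size, 11903.98 * 4096 / 2^20 = 46.50 MiB/s, matching the reported throughput. The nvmftestfini call that follows tears the target back down; condensed from its trace (the killprocess and _remove_spdk_ns bodies are not shown in the log, so the kill/wait and netns-delete lines below are assumptions), the sequence is roughly the sketch that follows, and the same init/teardown pair frames the discovery_remove_ifc test starting here:

    sync
    modprobe -v -r nvme-tcp                    # also unloads nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"         # killprocess 520890 (assumed body)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk            # _remove_spdk_ns (assumed body)
    ip -4 addr flush cvl_0_1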
00:26:48.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.612 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:48.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.613 --rc genhtml_branch_coverage=1 00:26:48.613 --rc genhtml_function_coverage=1 00:26:48.613 --rc genhtml_legend=1 00:26:48.613 --rc geninfo_all_blocks=1 00:26:48.613 --rc geninfo_unexecuted_blocks=1 00:26:48.613 00:26:48.613 ' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:48.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.613 --rc genhtml_branch_coverage=1 00:26:48.613 --rc genhtml_function_coverage=1 00:26:48.613 --rc genhtml_legend=1 00:26:48.613 --rc geninfo_all_blocks=1 00:26:48.613 --rc geninfo_unexecuted_blocks=1 00:26:48.613 00:26:48.613 ' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:48.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.613 --rc genhtml_branch_coverage=1 00:26:48.613 --rc genhtml_function_coverage=1 00:26:48.613 --rc genhtml_legend=1 00:26:48.613 --rc geninfo_all_blocks=1 00:26:48.613 --rc geninfo_unexecuted_blocks=1 00:26:48.613 00:26:48.613 ' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:48.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.613 --rc genhtml_branch_coverage=1 00:26:48.613 --rc genhtml_function_coverage=1 00:26:48.613 --rc genhtml_legend=1 00:26:48.613 --rc geninfo_all_blocks=1 00:26:48.613 --rc geninfo_unexecuted_blocks=1 00:26:48.613 00:26:48.613 ' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.613 
11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:48.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.613 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.614 11:07:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:56.756 11:07:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:56.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.756 11:07:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:56.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.756 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:56.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:56.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.757 
11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:26:56.757 00:26:56.757 --- 10.0.0.2 ping statistics --- 00:26:56.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.757 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:26:56.757 00:26:56.757 --- 10.0.0.1 ping statistics --- 00:26:56.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.757 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=531416 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 531416 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 531416 ']' 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
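Condensed from the nvmf_tcp_init trace above (a re-sketch, not the nvmf/common.sh source): the two ports of the e810 NIC found at 0000:4b:00.0/1 are split so cvl_0_0 serves as the target inside a fresh network namespace while cvl_0_1 stays in the root namespace as the initiator, and reachability is proven both ways before the target starts:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the target port to the initiator interface, tagged so teardown can
    # strip exactly this rule again ('grep -v SPDK_NVMF' in nvmftestfini)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator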
00:26:56.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:56.757 11:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.757 [2024-11-15 11:07:15.469841] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:26:56.757 [2024-11-15 11:07:15.469903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.757 [2024-11-15 11:07:15.568862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.757 [2024-11-15 11:07:15.618670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.757 [2024-11-15 11:07:15.618718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.757 [2024-11-15 11:07:15.618727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.757 [2024-11-15 11:07:15.618734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.757 [2024-11-15 11:07:15.618740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.757 [2024-11-15 11:07:15.619502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.758 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:56.758 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:56.758 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.758 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.758 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.019 [2024-11-15 11:07:16.338116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.019 [2024-11-15 11:07:16.346395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:57.019 null0 00:26:57.019 [2024-11-15 11:07:16.378331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=531630 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 531630 /tmp/host.sock 00:26:57.019 11:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 531630 ']' 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:57.019 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:57.019 11:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.019 [2024-11-15 11:07:16.454713] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:26:57.019 [2024-11-15 11:07:16.454775] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531630 ] 00:26:57.019 [2024-11-15 11:07:16.547242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.279 [2024-11-15 11:07:16.600743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:57.851 11:07:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.851 11:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 [2024-11-15 11:07:18.427502] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:59.235 [2024-11-15 11:07:18.427522] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:59.235 [2024-11-15 11:07:18.427536] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.235 [2024-11-15 11:07:18.556977] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:59.235 [2024-11-15 11:07:18.615704] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:59.235 [2024-11-15 11:07:18.616696] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2343410:1 started. 00:26:59.235 [2024-11-15 11:07:18.618242] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:59.235 [2024-11-15 11:07:18.618286] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:59.235 [2024-11-15 11:07:18.618308] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:59.235 [2024-11-15 11:07:18.618322] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.235 [2024-11-15 11:07:18.618342] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 [2024-11-15 11:07:18.667348] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2343410 was disconnected and freed. delete nvme_qpair. 
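The wait_for_bdev/get_bdev_list pair being stepped through here reduces to a small RPC polling loop; a minimal sketch, without whatever retry bound the real helper in discovery_remove_ifc.sh may carry:

    get_bdev_list() {
        # list bdev names over the host RPC socket, normalized to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # spin until the bdev list collapses to exactly the expected string
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

It is exercised twice in this test: wait_for_bdev nvme0n1 right after discovery attaches the controller, and wait_for_bdev '' after the target interface is pulled, to confirm the bdev disappears again.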
00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:59.235 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.495 11:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.438 11:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.379 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.639 11:07:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.639 11:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.580 11:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.580 11:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.580 11:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.522 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.783 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.783 11:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.725 [2024-11-15 11:07:24.059118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:04.725 [2024-11-15 11:07:24.059154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.725 [2024-11-15 11:07:24.059163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.725 [2024-11-15 11:07:24.059170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.725 [2024-11-15 11:07:24.059175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.725 [2024-11-15 11:07:24.059182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.725 [2024-11-15 11:07:24.059187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.725 [2024-11-15 11:07:24.059193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.725 [2024-11-15 11:07:24.059198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.725 [2024-11-15 11:07:24.059203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.725 [2024-11-15 11:07:24.059212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.725 [2024-11-15 11:07:24.059218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231fc00 is same with the state(6) to be set 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.725 [2024-11-15 11:07:24.069140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231fc00 (9): Bad file descriptor 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.725 11:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.725 [2024-11-15 11:07:24.079173] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:04.725 [2024-11-15 11:07:24.079183] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:04.725 [2024-11-15 11:07:24.079187] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:04.725 [2024-11-15 11:07:24.079190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:04.725 [2024-11-15 11:07:24.079208] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
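The repeated get_bdev_list / sleep 1 cycles above come from the test's polling helpers in host/discovery_remove_ifc.sh. A minimal sketch of those helpers, reconstructed from the xtrace output (the bodies are inferred from the trace, so the real script may add timeouts or error handling not shown here):

  # /tmp/host.sock is the RPC socket of the host-side SPDK app under test.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the bdev list equals the expected string:
  # '' while waiting for nvme0n1 to vanish, nvme1n1 while waiting for the re-attach.
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }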
00:27:05.665 [2024-11-15 11:07:25.111631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:05.665 [2024-11-15 11:07:25.111721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231fc00 with addr=10.0.0.2, port=4420 00:27:05.665 [2024-11-15 11:07:25.111753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231fc00 is same with the state(6) to be set 00:27:05.665 [2024-11-15 11:07:25.111808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231fc00 (9): Bad file descriptor 00:27:05.665 [2024-11-15 11:07:25.111932] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:05.665 [2024-11-15 11:07:25.111989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.665 [2024-11-15 11:07:25.112012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.665 [2024-11-15 11:07:25.112036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:05.665 [2024-11-15 11:07:25.112057] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:05.665 [2024-11-15 11:07:25.112073] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.665 [2024-11-15 11:07:25.112088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.665 [2024-11-15 11:07:25.112110] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:05.665 [2024-11-15 11:07:25.112125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:05.665 11:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.665 11:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.665 11:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.608 [2024-11-15 11:07:26.114531] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:06.608 [2024-11-15 11:07:26.114547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:06.608 [2024-11-15 11:07:26.114555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:06.608 [2024-11-15 11:07:26.114560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:06.608 [2024-11-15 11:07:26.114567] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:06.608 [2024-11-15 11:07:26.114573] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:06.608 [2024-11-15 11:07:26.114576] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
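The connect() failures above return errno 110 (ETIMEDOUT) because 10.0.0.2 was just removed from cvl_0_0, so the bdev_nvme layer cycles through disconnect, reconnect and reset until the test restores the address. To watch that state machine from the shell while the test runs, the same RPC socket can be queried directly; a hypothetical probe using the standard bdev_nvme_get_controllers RPC (the unfiltered jq call is illustrative only):

  # Dump the controllers the host app still tracks, including their reconnect state.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .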
00:27:06.608 [2024-11-15 11:07:26.114579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:06.608 [2024-11-15 11:07:26.114596] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:06.608 [2024-11-15 11:07:26.114612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.608 [2024-11-15 11:07:26.114619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.608 [2024-11-15 11:07:26.114626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.608 [2024-11-15 11:07:26.114631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.608 [2024-11-15 11:07:26.114637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.608 [2024-11-15 11:07:26.114642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.608 [2024-11-15 11:07:26.114647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.608 [2024-11-15 11:07:26.114653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.608 [2024-11-15 11:07:26.114658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.608 [2024-11-15 11:07:26.114664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.608 [2024-11-15 11:07:26.114669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:06.608 [2024-11-15 11:07:26.114897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f340 (9): Bad file descriptor 00:27:06.608 [2024-11-15 11:07:26.115907] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:06.608 [2024-11-15 11:07:26.115914] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.869 11:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.810 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.070 11:07:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:08.070 11:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.013 [2024-11-15 11:07:28.171582] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:09.013 [2024-11-15 11:07:28.171598] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:09.013 [2024-11-15 11:07:28.171607] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:09.013 [2024-11-15 11:07:28.299985] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.013 [2024-11-15 11:07:28.401763] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:09.013 [2024-11-15 11:07:28.402444] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x231ea60:1 started. 
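At this point the discovery service has re-attached the subsystem as nvme1 and a fresh qpair (0x231ea60) is connected. Condensed from the sh@75 through sh@86 commands in the trace above, the complete fault-injection sequence this test performs is:

  # Simulate losing the target interface, wait for teardown, then restore it.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''          # nvme0n1 disappears once the ctrlr stops retrying
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1     # the re-discovered subsystem comes back as a new bdev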
00:27:09.013 [2024-11-15 11:07:28.403333] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:09.013 [2024-11-15 11:07:28.403359] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:09.013 [2024-11-15 11:07:28.403374] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:09.013 [2024-11-15 11:07:28.403384] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:09.013 [2024-11-15 11:07:28.403390] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:09.013 11:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.013 [2024-11-15 11:07:28.451146] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x231ea60 was disconnected and freed. delete nvme_qpair. 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 531630 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 531630 ']' 00:27:09.986 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 531630 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 531630 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 531630' 00:27:10.283 killing process with pid 531630 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 531630 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 531630 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.283 rmmod nvme_tcp 00:27:10.283 rmmod nvme_fabrics 00:27:10.283 rmmod nvme_keyring 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 531416 ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 531416 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 531416 ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 531416 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 531416 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 531416' 00:27:10.283 killing process with pid 531416 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 531416 00:27:10.283 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 531416 00:27:10.588 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.588 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.588 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.588 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:10.589 11:07:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.589 11:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.582 11:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.582 00:27:12.582 real 0m24.302s 00:27:12.582 user 0m29.266s 00:27:12.582 sys 0m7.151s 00:27:12.582 11:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:12.582 11:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.582 ************************************ 00:27:12.582 END TEST nvmf_discovery_remove_ifc 00:27:12.582 ************************************ 00:27:12.582 11:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:12.582 11:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:12.582 11:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:12.582 11:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.582 ************************************ 00:27:12.582 START TEST nvmf_identify_kernel_target 00:27:12.582 ************************************ 00:27:12.582 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:12.844 * Looking for test storage... 
00:27:12.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:12.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.844 --rc genhtml_branch_coverage=1 00:27:12.844 --rc genhtml_function_coverage=1 00:27:12.844 --rc genhtml_legend=1 00:27:12.844 --rc geninfo_all_blocks=1 00:27:12.844 --rc geninfo_unexecuted_blocks=1 00:27:12.844 00:27:12.844 ' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:12.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.844 --rc genhtml_branch_coverage=1 00:27:12.844 --rc genhtml_function_coverage=1 00:27:12.844 --rc genhtml_legend=1 00:27:12.844 --rc geninfo_all_blocks=1 00:27:12.844 --rc geninfo_unexecuted_blocks=1 00:27:12.844 00:27:12.844 ' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:12.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.844 --rc genhtml_branch_coverage=1 00:27:12.844 --rc genhtml_function_coverage=1 00:27:12.844 --rc genhtml_legend=1 00:27:12.844 --rc geninfo_all_blocks=1 00:27:12.844 --rc geninfo_unexecuted_blocks=1 00:27:12.844 00:27:12.844 ' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:12.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.844 --rc genhtml_branch_coverage=1 00:27:12.844 --rc genhtml_function_coverage=1 00:27:12.844 --rc genhtml_legend=1 00:27:12.844 --rc geninfo_all_blocks=1 00:27:12.844 --rc geninfo_unexecuted_blocks=1 00:27:12.844 00:27:12.844 ' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.844 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:12.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.845 11:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.987 11:07:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.987 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.988 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:20.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:27:20.988 00:27:20.988 --- 10.0.0.2 ping statistics --- 00:27:20.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.988 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:27:20.988 00:27:20.988 --- 10.0.0.1 ping statistics --- 00:27:20.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.988 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.988 11:07:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:20.988 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:20.989 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:20.989 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:20.989 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:20.989 11:07:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.286 Waiting for block devices as requested 00:27:24.286 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.286 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.286 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.286 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.286 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:24.286 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:24.547 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.547 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.547 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:24.807 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.807 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.807 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:25.067 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.067 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.067 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.327 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:25.327 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
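The setup.sh reset and block-device checks above, together with the mkdir/echo/ln -s calls that follow, implement configure_kernel_target: the first unused, non-zoned NVMe block device is exported through the kernel nvmet stack at 10.0.0.1:4420. xtrace does not record redirection targets, so the attribute file names in this consolidated sketch are the standard nvmet configfs names, not literal trace output:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string
  echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
  echo /dev/nvme0n1 > "$ns/device_path"                          # backing block device
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                            # expose subsystem on the port

The nvme discover call further down then confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are reachable at that address.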
00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:25.588 No valid GPT data, bailing 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:25.588 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:25.850 00:27:25.850 Discovery Log Number of Records 2, Generation counter 2 00:27:25.850 =====Discovery Log Entry 0====== 00:27:25.850 trtype: tcp 00:27:25.850 adrfam: ipv4 00:27:25.850 subtype: current discovery subsystem 00:27:25.850 treq: not specified, sq flow control disable supported 00:27:25.850 portid: 1 00:27:25.850 trsvcid: 4420 00:27:25.850 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:25.850 traddr: 10.0.0.1 00:27:25.850 eflags: none 00:27:25.850 sectype: none 00:27:25.850 =====Discovery Log Entry 1====== 00:27:25.850 trtype: tcp 00:27:25.850 adrfam: ipv4 00:27:25.850 subtype: nvme subsystem 00:27:25.850 treq: not specified, sq flow control disable 
supported 00:27:25.850 portid: 1 00:27:25.850 trsvcid: 4420 00:27:25.850 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:25.850 traddr: 10.0.0.1 00:27:25.850 eflags: none 00:27:25.850 sectype: none 00:27:25.850 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:25.850 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:25.850 ===================================================== 00:27:25.850 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:25.850 ===================================================== 00:27:25.850 Controller Capabilities/Features 00:27:25.850 ================================ 00:27:25.850 Vendor ID: 0000 00:27:25.850 Subsystem Vendor ID: 0000 00:27:25.850 Serial Number: 671b5e73e9f8731698ae 00:27:25.850 Model Number: Linux 00:27:25.850 Firmware Version: 6.8.9-20 00:27:25.850 Recommended Arb Burst: 0 00:27:25.850 IEEE OUI Identifier: 00 00 00 00:27:25.850 Multi-path I/O 00:27:25.850 May have multiple subsystem ports: No 00:27:25.850 May have multiple controllers: No 00:27:25.850 Associated with SR-IOV VF: No 00:27:25.850 Max Data Transfer Size: Unlimited 00:27:25.850 Max Number of Namespaces: 0 00:27:25.850 Max Number of I/O Queues: 1024 00:27:25.850 NVMe Specification Version (VS): 1.3 00:27:25.850 NVMe Specification Version (Identify): 1.3 00:27:25.850 Maximum Queue Entries: 1024 00:27:25.850 Contiguous Queues Required: No 00:27:25.850 Arbitration Mechanisms Supported 00:27:25.850 Weighted Round Robin: Not Supported 00:27:25.850 Vendor Specific: Not Supported 00:27:25.850 Reset Timeout: 7500 ms 00:27:25.850 Doorbell Stride: 4 bytes 00:27:25.850 NVM Subsystem Reset: Not Supported 00:27:25.850 Command Sets Supported 00:27:25.850 NVM Command Set: Supported 00:27:25.850 Boot Partition: Not Supported 00:27:25.850 Memory Page Size Minimum: 4096 bytes 00:27:25.850 Memory Page Size Maximum: 4096 bytes 00:27:25.850 Persistent Memory Region: Not Supported 00:27:25.850 Optional Asynchronous Events Supported 00:27:25.850 Namespace Attribute Notices: Not Supported 00:27:25.850 Firmware Activation Notices: Not Supported 00:27:25.850 ANA Change Notices: Not Supported 00:27:25.850 PLE Aggregate Log Change Notices: Not Supported 00:27:25.850 LBA Status Info Alert Notices: Not Supported 00:27:25.850 EGE Aggregate Log Change Notices: Not Supported 00:27:25.850 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.850 Zone Descriptor Change Notices: Not Supported 00:27:25.850 Discovery Log Change Notices: Supported 00:27:25.850 Controller Attributes 00:27:25.850 128-bit Host Identifier: Not Supported 00:27:25.850 Non-Operational Permissive Mode: Not Supported 00:27:25.850 NVM Sets: Not Supported 00:27:25.850 Read Recovery Levels: Not Supported 00:27:25.850 Endurance Groups: Not Supported 00:27:25.850 Predictable Latency Mode: Not Supported 00:27:25.850 Traffic Based Keep ALive: Not Supported 00:27:25.850 Namespace Granularity: Not Supported 00:27:25.850 SQ Associations: Not Supported 00:27:25.850 UUID List: Not Supported 00:27:25.850 Multi-Domain Subsystem: Not Supported 00:27:25.850 Fixed Capacity Management: Not Supported 00:27:25.850 Variable Capacity Management: Not Supported 00:27:25.850 Delete Endurance Group: Not Supported 00:27:25.850 Delete NVM Set: Not Supported 00:27:25.850 Extended LBA Formats Supported: Not Supported 00:27:25.850 Flexible Data Placement 
Supported: Not Supported 00:27:25.850 00:27:25.850 Controller Memory Buffer Support 00:27:25.850 ================================ 00:27:25.850 Supported: No 00:27:25.850 00:27:25.850 Persistent Memory Region Support 00:27:25.850 ================================ 00:27:25.850 Supported: No 00:27:25.850 00:27:25.850 Admin Command Set Attributes 00:27:25.850 ============================ 00:27:25.850 Security Send/Receive: Not Supported 00:27:25.850 Format NVM: Not Supported 00:27:25.850 Firmware Activate/Download: Not Supported 00:27:25.850 Namespace Management: Not Supported 00:27:25.850 Device Self-Test: Not Supported 00:27:25.850 Directives: Not Supported 00:27:25.850 NVMe-MI: Not Supported 00:27:25.850 Virtualization Management: Not Supported 00:27:25.850 Doorbell Buffer Config: Not Supported 00:27:25.850 Get LBA Status Capability: Not Supported 00:27:25.850 Command & Feature Lockdown Capability: Not Supported 00:27:25.850 Abort Command Limit: 1 00:27:25.850 Async Event Request Limit: 1 00:27:25.850 Number of Firmware Slots: N/A 00:27:25.850 Firmware Slot 1 Read-Only: N/A 00:27:25.850 Firmware Activation Without Reset: N/A 00:27:25.850 Multiple Update Detection Support: N/A 00:27:25.850 Firmware Update Granularity: No Information Provided 00:27:25.850 Per-Namespace SMART Log: No 00:27:25.850 Asymmetric Namespace Access Log Page: Not Supported 00:27:25.850 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:25.850 Command Effects Log Page: Not Supported 00:27:25.850 Get Log Page Extended Data: Supported 00:27:25.850 Telemetry Log Pages: Not Supported 00:27:25.850 Persistent Event Log Pages: Not Supported 00:27:25.850 Supported Log Pages Log Page: May Support 00:27:25.850 Commands Supported & Effects Log Page: Not Supported 00:27:25.851 Feature Identifiers & Effects Log Page:May Support 00:27:25.851 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.851 Data Area 4 for Telemetry Log: Not Supported 00:27:25.851 Error Log Page Entries Supported: 1 00:27:25.851 Keep Alive: Not Supported 00:27:25.851 00:27:25.851 NVM Command Set Attributes 00:27:25.851 ========================== 00:27:25.851 Submission Queue Entry Size 00:27:25.851 Max: 1 00:27:25.851 Min: 1 00:27:25.851 Completion Queue Entry Size 00:27:25.851 Max: 1 00:27:25.851 Min: 1 00:27:25.851 Number of Namespaces: 0 00:27:25.851 Compare Command: Not Supported 00:27:25.851 Write Uncorrectable Command: Not Supported 00:27:25.851 Dataset Management Command: Not Supported 00:27:25.851 Write Zeroes Command: Not Supported 00:27:25.851 Set Features Save Field: Not Supported 00:27:25.851 Reservations: Not Supported 00:27:25.851 Timestamp: Not Supported 00:27:25.851 Copy: Not Supported 00:27:25.851 Volatile Write Cache: Not Present 00:27:25.851 Atomic Write Unit (Normal): 1 00:27:25.851 Atomic Write Unit (PFail): 1 00:27:25.851 Atomic Compare & Write Unit: 1 00:27:25.851 Fused Compare & Write: Not Supported 00:27:25.851 Scatter-Gather List 00:27:25.851 SGL Command Set: Supported 00:27:25.851 SGL Keyed: Not Supported 00:27:25.851 SGL Bit Bucket Descriptor: Not Supported 00:27:25.851 SGL Metadata Pointer: Not Supported 00:27:25.851 Oversized SGL: Not Supported 00:27:25.851 SGL Metadata Address: Not Supported 00:27:25.851 SGL Offset: Supported 00:27:25.851 Transport SGL Data Block: Not Supported 00:27:25.851 Replay Protected Memory Block: Not Supported 00:27:25.851 00:27:25.851 Firmware Slot Information 00:27:25.851 ========================= 00:27:25.851 Active slot: 0 00:27:25.851 00:27:25.851 00:27:25.851 Error Log 00:27:25.851 
========= 00:27:25.851 00:27:25.851 Active Namespaces 00:27:25.851 ================= 00:27:25.851 Discovery Log Page 00:27:25.851 ================== 00:27:25.851 Generation Counter: 2 00:27:25.851 Number of Records: 2 00:27:25.851 Record Format: 0 00:27:25.851 00:27:25.851 Discovery Log Entry 0 00:27:25.851 ---------------------- 00:27:25.851 Transport Type: 3 (TCP) 00:27:25.851 Address Family: 1 (IPv4) 00:27:25.851 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:25.851 Entry Flags: 00:27:25.851 Duplicate Returned Information: 0 00:27:25.851 Explicit Persistent Connection Support for Discovery: 0 00:27:25.851 Transport Requirements: 00:27:25.851 Secure Channel: Not Specified 00:27:25.851 Port ID: 1 (0x0001) 00:27:25.851 Controller ID: 65535 (0xffff) 00:27:25.851 Admin Max SQ Size: 32 00:27:25.851 Transport Service Identifier: 4420 00:27:25.851 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:25.851 Transport Address: 10.0.0.1 00:27:25.851 Discovery Log Entry 1 00:27:25.851 ---------------------- 00:27:25.851 Transport Type: 3 (TCP) 00:27:25.851 Address Family: 1 (IPv4) 00:27:25.851 Subsystem Type: 2 (NVM Subsystem) 00:27:25.851 Entry Flags: 00:27:25.851 Duplicate Returned Information: 0 00:27:25.851 Explicit Persistent Connection Support for Discovery: 0 00:27:25.851 Transport Requirements: 00:27:25.851 Secure Channel: Not Specified 00:27:25.851 Port ID: 1 (0x0001) 00:27:25.851 Controller ID: 65535 (0xffff) 00:27:25.851 Admin Max SQ Size: 32 00:27:25.851 Transport Service Identifier: 4420 00:27:25.851 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:25.851 Transport Address: 10.0.0.1 00:27:25.851 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:26.113 get_feature(0x01) failed 00:27:26.113 get_feature(0x02) failed 00:27:26.113 get_feature(0x04) failed 00:27:26.113 ===================================================== 00:27:26.113 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:26.113 ===================================================== 00:27:26.113 Controller Capabilities/Features 00:27:26.113 ================================ 00:27:26.113 Vendor ID: 0000 00:27:26.113 Subsystem Vendor ID: 0000 00:27:26.113 Serial Number: 7970bbf9f5cbfb6091c8 00:27:26.113 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:26.113 Firmware Version: 6.8.9-20 00:27:26.113 Recommended Arb Burst: 6 00:27:26.113 IEEE OUI Identifier: 00 00 00 00:27:26.113 Multi-path I/O 00:27:26.113 May have multiple subsystem ports: Yes 00:27:26.113 May have multiple controllers: Yes 00:27:26.113 Associated with SR-IOV VF: No 00:27:26.113 Max Data Transfer Size: Unlimited 00:27:26.113 Max Number of Namespaces: 1024 00:27:26.113 Max Number of I/O Queues: 128 00:27:26.113 NVMe Specification Version (VS): 1.3 00:27:26.113 NVMe Specification Version (Identify): 1.3 00:27:26.113 Maximum Queue Entries: 1024 00:27:26.113 Contiguous Queues Required: No 00:27:26.113 Arbitration Mechanisms Supported 00:27:26.113 Weighted Round Robin: Not Supported 00:27:26.113 Vendor Specific: Not Supported 00:27:26.113 Reset Timeout: 7500 ms 00:27:26.113 Doorbell Stride: 4 bytes 00:27:26.113 NVM Subsystem Reset: Not Supported 00:27:26.113 Command Sets Supported 00:27:26.113 NVM Command Set: Supported 00:27:26.113 Boot Partition: Not Supported 00:27:26.113 
Memory Page Size Minimum: 4096 bytes 00:27:26.113 Memory Page Size Maximum: 4096 bytes 00:27:26.113 Persistent Memory Region: Not Supported 00:27:26.113 Optional Asynchronous Events Supported 00:27:26.113 Namespace Attribute Notices: Supported 00:27:26.113 Firmware Activation Notices: Not Supported 00:27:26.113 ANA Change Notices: Supported 00:27:26.113 PLE Aggregate Log Change Notices: Not Supported 00:27:26.113 LBA Status Info Alert Notices: Not Supported 00:27:26.113 EGE Aggregate Log Change Notices: Not Supported 00:27:26.113 Normal NVM Subsystem Shutdown event: Not Supported 00:27:26.113 Zone Descriptor Change Notices: Not Supported 00:27:26.113 Discovery Log Change Notices: Not Supported 00:27:26.113 Controller Attributes 00:27:26.113 128-bit Host Identifier: Supported 00:27:26.113 Non-Operational Permissive Mode: Not Supported 00:27:26.113 NVM Sets: Not Supported 00:27:26.113 Read Recovery Levels: Not Supported 00:27:26.113 Endurance Groups: Not Supported 00:27:26.113 Predictable Latency Mode: Not Supported 00:27:26.113 Traffic Based Keep ALive: Supported 00:27:26.113 Namespace Granularity: Not Supported 00:27:26.113 SQ Associations: Not Supported 00:27:26.113 UUID List: Not Supported 00:27:26.113 Multi-Domain Subsystem: Not Supported 00:27:26.113 Fixed Capacity Management: Not Supported 00:27:26.113 Variable Capacity Management: Not Supported 00:27:26.113 Delete Endurance Group: Not Supported 00:27:26.113 Delete NVM Set: Not Supported 00:27:26.113 Extended LBA Formats Supported: Not Supported 00:27:26.113 Flexible Data Placement Supported: Not Supported 00:27:26.113 00:27:26.113 Controller Memory Buffer Support 00:27:26.113 ================================ 00:27:26.113 Supported: No 00:27:26.113 00:27:26.113 Persistent Memory Region Support 00:27:26.113 ================================ 00:27:26.113 Supported: No 00:27:26.113 00:27:26.113 Admin Command Set Attributes 00:27:26.113 ============================ 00:27:26.114 Security Send/Receive: Not Supported 00:27:26.114 Format NVM: Not Supported 00:27:26.114 Firmware Activate/Download: Not Supported 00:27:26.114 Namespace Management: Not Supported 00:27:26.114 Device Self-Test: Not Supported 00:27:26.114 Directives: Not Supported 00:27:26.114 NVMe-MI: Not Supported 00:27:26.114 Virtualization Management: Not Supported 00:27:26.114 Doorbell Buffer Config: Not Supported 00:27:26.114 Get LBA Status Capability: Not Supported 00:27:26.114 Command & Feature Lockdown Capability: Not Supported 00:27:26.114 Abort Command Limit: 4 00:27:26.114 Async Event Request Limit: 4 00:27:26.114 Number of Firmware Slots: N/A 00:27:26.114 Firmware Slot 1 Read-Only: N/A 00:27:26.114 Firmware Activation Without Reset: N/A 00:27:26.114 Multiple Update Detection Support: N/A 00:27:26.114 Firmware Update Granularity: No Information Provided 00:27:26.114 Per-Namespace SMART Log: Yes 00:27:26.114 Asymmetric Namespace Access Log Page: Supported 00:27:26.114 ANA Transition Time : 10 sec 00:27:26.114 00:27:26.114 Asymmetric Namespace Access Capabilities 00:27:26.114 ANA Optimized State : Supported 00:27:26.114 ANA Non-Optimized State : Supported 00:27:26.114 ANA Inaccessible State : Supported 00:27:26.114 ANA Persistent Loss State : Supported 00:27:26.114 ANA Change State : Supported 00:27:26.114 ANAGRPID is not changed : No 00:27:26.114 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:26.114 00:27:26.114 ANA Group Identifier Maximum : 128 00:27:26.114 Number of ANA Group Identifiers : 128 00:27:26.114 Max Number of Allowed Namespaces : 1024 00:27:26.114 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:26.114 Command Effects Log Page: Supported 00:27:26.114 Get Log Page Extended Data: Supported 00:27:26.114 Telemetry Log Pages: Not Supported 00:27:26.114 Persistent Event Log Pages: Not Supported 00:27:26.114 Supported Log Pages Log Page: May Support 00:27:26.114 Commands Supported & Effects Log Page: Not Supported 00:27:26.114 Feature Identifiers & Effects Log Page:May Support 00:27:26.114 NVMe-MI Commands & Effects Log Page: May Support 00:27:26.114 Data Area 4 for Telemetry Log: Not Supported 00:27:26.114 Error Log Page Entries Supported: 128 00:27:26.114 Keep Alive: Supported 00:27:26.114 Keep Alive Granularity: 1000 ms 00:27:26.114 00:27:26.114 NVM Command Set Attributes 00:27:26.114 ========================== 00:27:26.114 Submission Queue Entry Size 00:27:26.114 Max: 64 00:27:26.114 Min: 64 00:27:26.114 Completion Queue Entry Size 00:27:26.114 Max: 16 00:27:26.114 Min: 16 00:27:26.114 Number of Namespaces: 1024 00:27:26.114 Compare Command: Not Supported 00:27:26.114 Write Uncorrectable Command: Not Supported 00:27:26.114 Dataset Management Command: Supported 00:27:26.114 Write Zeroes Command: Supported 00:27:26.114 Set Features Save Field: Not Supported 00:27:26.114 Reservations: Not Supported 00:27:26.114 Timestamp: Not Supported 00:27:26.114 Copy: Not Supported 00:27:26.114 Volatile Write Cache: Present 00:27:26.114 Atomic Write Unit (Normal): 1 00:27:26.114 Atomic Write Unit (PFail): 1 00:27:26.114 Atomic Compare & Write Unit: 1 00:27:26.114 Fused Compare & Write: Not Supported 00:27:26.114 Scatter-Gather List 00:27:26.114 SGL Command Set: Supported 00:27:26.114 SGL Keyed: Not Supported 00:27:26.114 SGL Bit Bucket Descriptor: Not Supported 00:27:26.114 SGL Metadata Pointer: Not Supported 00:27:26.114 Oversized SGL: Not Supported 00:27:26.114 SGL Metadata Address: Not Supported 00:27:26.114 SGL Offset: Supported 00:27:26.114 Transport SGL Data Block: Not Supported 00:27:26.114 Replay Protected Memory Block: Not Supported 00:27:26.114 00:27:26.114 Firmware Slot Information 00:27:26.114 ========================= 00:27:26.114 Active slot: 0 00:27:26.114 00:27:26.114 Asymmetric Namespace Access 00:27:26.114 =========================== 00:27:26.114 Change Count : 0 00:27:26.114 Number of ANA Group Descriptors : 1 00:27:26.114 ANA Group Descriptor : 0 00:27:26.114 ANA Group ID : 1 00:27:26.114 Number of NSID Values : 1 00:27:26.114 Change Count : 0 00:27:26.114 ANA State : 1 00:27:26.114 Namespace Identifier : 1 00:27:26.114 00:27:26.114 Commands Supported and Effects 00:27:26.114 ============================== 00:27:26.114 Admin Commands 00:27:26.114 -------------- 00:27:26.114 Get Log Page (02h): Supported 00:27:26.114 Identify (06h): Supported 00:27:26.114 Abort (08h): Supported 00:27:26.114 Set Features (09h): Supported 00:27:26.114 Get Features (0Ah): Supported 00:27:26.114 Asynchronous Event Request (0Ch): Supported 00:27:26.114 Keep Alive (18h): Supported 00:27:26.114 I/O Commands 00:27:26.114 ------------ 00:27:26.114 Flush (00h): Supported 00:27:26.114 Write (01h): Supported LBA-Change 00:27:26.114 Read (02h): Supported 00:27:26.114 Write Zeroes (08h): Supported LBA-Change 00:27:26.114 Dataset Management (09h): Supported 00:27:26.114 00:27:26.114 Error Log 00:27:26.114 ========= 00:27:26.114 Entry: 0 00:27:26.114 Error Count: 0x3 00:27:26.114 Submission Queue Id: 0x0 00:27:26.114 Command Id: 0x5 00:27:26.114 Phase Bit: 0 00:27:26.114 Status Code: 0x2 00:27:26.114 Status Code Type: 0x0 00:27:26.114 Do Not Retry: 1 00:27:26.114 
Error Location: 0x28 00:27:26.114 LBA: 0x0 00:27:26.114 Namespace: 0x0 00:27:26.114 Vendor Log Page: 0x0 00:27:26.114 ----------- 00:27:26.114 Entry: 1 00:27:26.114 Error Count: 0x2 00:27:26.114 Submission Queue Id: 0x0 00:27:26.114 Command Id: 0x5 00:27:26.114 Phase Bit: 0 00:27:26.114 Status Code: 0x2 00:27:26.114 Status Code Type: 0x0 00:27:26.114 Do Not Retry: 1 00:27:26.114 Error Location: 0x28 00:27:26.114 LBA: 0x0 00:27:26.114 Namespace: 0x0 00:27:26.114 Vendor Log Page: 0x0 00:27:26.114 ----------- 00:27:26.114 Entry: 2 00:27:26.114 Error Count: 0x1 00:27:26.114 Submission Queue Id: 0x0 00:27:26.114 Command Id: 0x4 00:27:26.114 Phase Bit: 0 00:27:26.114 Status Code: 0x2 00:27:26.114 Status Code Type: 0x0 00:27:26.114 Do Not Retry: 1 00:27:26.114 Error Location: 0x28 00:27:26.114 LBA: 0x0 00:27:26.114 Namespace: 0x0 00:27:26.114 Vendor Log Page: 0x0 00:27:26.114 00:27:26.114 Number of Queues 00:27:26.114 ================ 00:27:26.114 Number of I/O Submission Queues: 128 00:27:26.114 Number of I/O Completion Queues: 128 00:27:26.114 00:27:26.114 ZNS Specific Controller Data 00:27:26.114 ============================ 00:27:26.114 Zone Append Size Limit: 0 00:27:26.114 00:27:26.114 00:27:26.114 Active Namespaces 00:27:26.114 ================= 00:27:26.114 get_feature(0x05) failed 00:27:26.114 Namespace ID:1 00:27:26.114 Command Set Identifier: NVM (00h) 00:27:26.114 Deallocate: Supported 00:27:26.114 Deallocated/Unwritten Error: Not Supported 00:27:26.114 Deallocated Read Value: Unknown 00:27:26.114 Deallocate in Write Zeroes: Not Supported 00:27:26.114 Deallocated Guard Field: 0xFFFF 00:27:26.114 Flush: Supported 00:27:26.114 Reservation: Not Supported 00:27:26.114 Namespace Sharing Capabilities: Multiple Controllers 00:27:26.114 Size (in LBAs): 3750748848 (1788GiB) 00:27:26.114 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:26.114 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:26.114 UUID: ad2cd9d2-cc2e-42c7-a265-13bd7b1f78d3 00:27:26.114 Thin Provisioning: Not Supported 00:27:26.114 Per-NS Atomic Units: Yes 00:27:26.114 Atomic Write Unit (Normal): 8 00:27:26.114 Atomic Write Unit (PFail): 8 00:27:26.114 Preferred Write Granularity: 8 00:27:26.114 Atomic Compare & Write Unit: 8 00:27:26.114 Atomic Boundary Size (Normal): 0 00:27:26.114 Atomic Boundary Size (PFail): 0 00:27:26.114 Atomic Boundary Offset: 0 00:27:26.114 NGUID/EUI64 Never Reused: No 00:27:26.114 ANA group ID: 1 00:27:26.114 Namespace Write Protected: No 00:27:26.114 Number of LBA Formats: 1 00:27:26.114 Current LBA Format: LBA Format #00 00:27:26.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:26.114 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.114 rmmod nvme_tcp 00:27:26.114 rmmod nvme_fabrics 00:27:26.114 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.115 11:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:28.659 11:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.958 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.958 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:32.529 00:27:32.529 real 0m19.689s 00:27:32.530 user 0m5.437s 00:27:32.530 sys 0m11.293s 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.530 ************************************ 00:27:32.530 END TEST nvmf_identify_kernel_target 00:27:32.530 ************************************ 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.530 ************************************ 00:27:32.530 START TEST nvmf_auth_host 00:27:32.530 ************************************ 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.530 * Looking for test storage... 
00:27:32.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:32.530 11:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.530 --rc genhtml_branch_coverage=1 00:27:32.530 --rc genhtml_function_coverage=1 00:27:32.530 --rc genhtml_legend=1 00:27:32.530 --rc geninfo_all_blocks=1 00:27:32.530 --rc geninfo_unexecuted_blocks=1 00:27:32.530 00:27:32.530 ' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.530 --rc genhtml_branch_coverage=1 00:27:32.530 --rc genhtml_function_coverage=1 00:27:32.530 --rc genhtml_legend=1 00:27:32.530 --rc geninfo_all_blocks=1 00:27:32.530 --rc geninfo_unexecuted_blocks=1 00:27:32.530 00:27:32.530 ' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.530 --rc genhtml_branch_coverage=1 00:27:32.530 --rc genhtml_function_coverage=1 00:27:32.530 --rc genhtml_legend=1 00:27:32.530 --rc geninfo_all_blocks=1 00:27:32.530 --rc geninfo_unexecuted_blocks=1 00:27:32.530 00:27:32.530 ' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.530 --rc genhtml_branch_coverage=1 00:27:32.530 --rc genhtml_function_coverage=1 00:27:32.530 --rc genhtml_legend=1 00:27:32.530 --rc geninfo_all_blocks=1 00:27:32.530 --rc geninfo_unexecuted_blocks=1 00:27:32.530 00:27:32.530 ' 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.530 11:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.530 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.791 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.792 11:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.931 11:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.931 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:40.932 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:40.932 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.932 
11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:40.932 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:40.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.932 11:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:27:40.932 00:27:40.932 --- 10.0.0.2 ping statistics --- 00:27:40.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.932 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:27:40.932 00:27:40.932 --- 10.0.0.1 ping statistics --- 00:27:40.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.932 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=546183 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 546183 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 546183 ']' 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.932 11:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3fbdeef39160ed0e96387635a86c3343 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gUI 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3fbdeef39160ed0e96387635a86c3343 0 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3fbdeef39160ed0e96387635a86c3343 0 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3fbdeef39160ed0e96387635a86c3343 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gUI 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gUI 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gUI 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.194 11:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc183b39a162bd34e55f67ae67a1a01f3ee096342eaf28eeb4763d6b4eeb7327 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9is 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc183b39a162bd34e55f67ae67a1a01f3ee096342eaf28eeb4763d6b4eeb7327 3 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc183b39a162bd34e55f67ae67a1a01f3ee096342eaf28eeb4763d6b4eeb7327 3 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc183b39a162bd34e55f67ae67a1a01f3ee096342eaf28eeb4763d6b4eeb7327 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9is 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9is 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9is 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.194 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5f7fd1be40f22b198fb336d9161984f3c1e4b7813b342550 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.khW 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5f7fd1be40f22b198fb336d9161984f3c1e4b7813b342550 0 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5f7fd1be40f22b198fb336d9161984f3c1e4b7813b342550 0 
00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5f7fd1be40f22b198fb336d9161984f3c1e4b7813b342550 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:41.195 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.456 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.khW 00:27:41.456 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.khW 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.khW 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2c49a7ecdbc685f11f2e251770bed0ff80d39fb6a80e98c3 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3ol 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2c49a7ecdbc685f11f2e251770bed0ff80d39fb6a80e98c3 2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2c49a7ecdbc685f11f2e251770bed0ff80d39fb6a80e98c3 2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2c49a7ecdbc685f11f2e251770bed0ff80d39fb6a80e98c3 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3ol 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3ol 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3ol 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.457 11:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=448dcbb4f79db7e12e1f4eeca7d0652c 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OgL 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 448dcbb4f79db7e12e1f4eeca7d0652c 1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 448dcbb4f79db7e12e1f4eeca7d0652c 1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=448dcbb4f79db7e12e1f4eeca7d0652c 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OgL 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OgL 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.OgL 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0c12d6415e9ab13074eda07d238b7d3 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b34 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0c12d6415e9ab13074eda07d238b7d3 1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0c12d6415e9ab13074eda07d238b7d3 1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d0c12d6415e9ab13074eda07d238b7d3 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b34 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b34 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.b34 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=786204ed755ab136b50cda99c02652231d2958e226f0d895 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bFd 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 786204ed755ab136b50cda99c02652231d2958e226f0d895 2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 786204ed755ab136b50cda99c02652231d2958e226f0d895 2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=786204ed755ab136b50cda99c02652231d2958e226f0d895 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:41.457 11:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bFd 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bFd 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bFd 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.719 11:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f29eb5a75f40e25b153325e2b7cb4062 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Buc 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f29eb5a75f40e25b153325e2b7cb4062 0 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f29eb5a75f40e25b153325e2b7cb4062 0 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f29eb5a75f40e25b153325e2b7cb4062 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Buc 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Buc 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Buc 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d4cd5e74430498549eda551e986bb58ccf1f90a3f753e2cbd02c229431de914c 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Tqu 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d4cd5e74430498549eda551e986bb58ccf1f90a3f753e2cbd02c229431de914c 3 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d4cd5e74430498549eda551e986bb58ccf1f90a3f753e2cbd02c229431de914c 3 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d4cd5e74430498549eda551e986bb58ccf1f90a3f753e2cbd02c229431de914c 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Tqu 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Tqu 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Tqu 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 546183 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 546183 ']' 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:41.719 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.720 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:41.720 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gUI 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9is ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9is 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.khW 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3ol ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.3ol 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OgL 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.b34 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b34 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bFd 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Buc ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Buc 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Tqu 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.981 11:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:41.981 11:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:46.189 Waiting for block devices as requested 00:27:46.189 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:46.189 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:46.189 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:46.189 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:46.190 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:46.190 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:46.190 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:46.190 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:46.190 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:46.190 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:46.450 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:46.450 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:46.450 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:46.450 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:46.710 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:46.710 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:46.710 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:47.652 No valid GPT data, bailing 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:47.652 11:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:47.652 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:47.913 00:27:47.913 Discovery Log Number of Records 2, Generation counter 2 00:27:47.913 =====Discovery Log Entry 0====== 00:27:47.913 trtype: tcp 00:27:47.913 adrfam: ipv4 00:27:47.913 subtype: current discovery subsystem 00:27:47.913 treq: not specified, sq flow control disable supported 00:27:47.913 portid: 1 00:27:47.913 trsvcid: 4420 00:27:47.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:47.913 traddr: 10.0.0.1 00:27:47.913 eflags: none 00:27:47.913 sectype: none 00:27:47.913 =====Discovery Log Entry 1====== 00:27:47.913 trtype: tcp 00:27:47.913 adrfam: ipv4 00:27:47.913 subtype: nvme subsystem 00:27:47.913 treq: not specified, sq flow control disable supported 00:27:47.913 portid: 1 00:27:47.913 trsvcid: 4420 00:27:47.913 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:47.913 traddr: 10.0.0.1 00:27:47.913 eflags: none 00:27:47.913 sectype: none 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.913 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 nvme0n1 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
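From here the test settles into its main loop: for each digest/dhgroup/keyid combination, `nvmet_auth_set_key` re-programs the kernel target's host entry with the matching DHHC-1 secrets, and `connect_authenticate` narrows the SPDK host to that one digest and DH group, attaches with `--dhchap-key`/`--dhchap-ctrlr-key`, checks that a controller named nvme0 appeared, and detaches again. A sketch of the sha256/ffdhe2048/keyid-0 pass that starts here; the rpc.py path is hypothetical, and the configfs attribute names are my assumption about where the echoed 'hmac(sha256)', 'ffdhe2048', and DHHC-1 strings land:

```bash
#!/usr/bin/env bash
# One connect_authenticate iteration, reconstructed from the trace.
rpc=/path/to/spdk/scripts/rpc.py      # hypothetical path to SPDK's rpc.py
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Target side: set digest, DH group, and both secrets on the host entry.
# (Attribute names below are assumed; the trace only shows the echoed values.)
echo 'hmac(sha256)'    > "$host/dhchap_hash"
echo 'ffdhe2048'       > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...:'  > "$host/dhchap_key"       # keys[0] from the log
echo 'DHHC-1:03:...:'  > "$host/dhchap_ctrl_key"  # ckeys[0], bidirectional auth

# Host side: restrict negotiation to the combination under test, then attach.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Pass criterion: the controller exists; then detach for the next combination.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0
```

The `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}` expansion visible in the trace is what makes the controller key optional: slot 4, whose `ckeys[4]` is empty, exercises one-way authentication with no `--dhchap-ctrlr-key` at all.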
00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.175 nvme0n1 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.175 11:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.175 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.436 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 nvme0n1 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.699 11:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 nvme0n1 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
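The nvmet_auth_set_key trace above (host/auth.sh@42-51) stages one key pair on the kernel nvmet target before every connection attempt. xtrace does not print redirections, so the bare echo lines are presumably being written into the nvmet configfs host entry. A minimal sketch of that step, assuming the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the host NQN used throughout this run:

    # Target-side DH-HMAC-CHAP setup, assuming the standard nvmet configfs
    # layout; the exact paths are an assumption, not taken from this log.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        echo "hmac($digest)" > "$host_dir/dhchap_hash"    # e.g. hmac(sha256)
        echo "$dhgroup" > "$host_dir/dhchap_dhgroup"      # e.g. ffdhe2048
        echo "$key" > "$host_dir/dhchap_key"              # DHHC-1:... host secret
        [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional auth
    }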
host/auth.sh@61 -- # get_main_ns_ip 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.699 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 nvme0n1 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.961 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.223 nvme0n1 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.223 11:08:08 
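keyid 4 is the one entry with no controller key: ckey is empty, the [[ -z '' ]] branch is taken, and the matching attach above passes only --dhchap-key key4. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment at host/auth.sh@58 is what makes that automatic; the array expands to the option pair only when a controller key exists. A standalone illustration (the ckeys values here are placeholders):

    # ${var:+words} expands to the alternate words only when var is set and
    # non-empty, so an empty controller key drops the whole option pair.
    ckeys=([1]="DHHC-1:02:placeholder:" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey[*]}"
    done
    # keyid=1 -> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # keyid=4 -> --dhchap-key key4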
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.223 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.484 nvme0n1 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.484 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.485 
11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.485 11:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.746 nvme0n1 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.746 11:08:09 
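Every secret in this run uses the NVMe-oF in-band authentication representation DHHC-1:<t>:<base64>:, where (as I read the spec and nvme-cli's gen-dhchap-key, stated here as an assumption rather than something the log confirms) the middle field names the secret transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret plus a 4-byte CRC32. That is easy to sanity-check against the keyid=2 secret above:

    # A :01: (SHA-256-sized) secret should decode to 32 + 4 CRC32 = 36 bytes;
    # the :03: secrets in this log should similarly decode to 64 + 4 = 68.
    key='DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:'
    payload=${key#DHHC-1:01:}
    payload=${payload%:}
    echo -n "$payload" | base64 -d | wc -c   # prints 36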
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.746 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.006 nvme0n1 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.006 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.007 11:08:09 
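The ip_candidates trace running through here is get_main_ns_ip (nvmf/common.sh@769-783): it maps the active transport to the name of the environment variable holding the initiator-side address, indirectly expands it, and prints the result (10.0.0.1 for tcp in this run). Condensed, the selection logic is roughly:

    # Condensed from the traced logic; TEST_TRANSPORT and the NVMF_* address
    # variables are assumed to be exported by the surrounding harness.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # 10.0.0.1 here
    }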
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.007 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.267 nvme0n1 00:27:50.267 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.267 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.268 11:08:09 
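After each authenticated attach, the script verifies that exactly the expected controller came up and tears it down before the next keyid: bdev_nvme_get_controllers is piped through jq -r '.[].name', the result is matched against nvme0 (the \n\v\m\e\0 pattern in the trace is just that literal with every character escaped), and bdev_nvme_detach_controller nvme0 cleans up. As a standalone step, assuming SPDK's scripts/rpc.py on the default RPC socket:

    # Verify the attach produced the expected controller, then detach so the
    # next key starts from a clean state. The rpc.py path is an assumption.
    rpc=./scripts/rpc.py
    name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "authenticated connect failed: '$name'" >&2; exit 1; }
    "$rpc" bdev_nvme_detach_controller nvme0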
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.268 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 nvme0n1 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.530 11:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.791 nvme0n1 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:50.791 11:08:10 
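Zooming out, this whole stretch of the log is one nested sweep: for dhgroup in "${dhgroups[@]}" (host/auth.sh@101) over for keyid in "${!keys[@]}" (host/auth.sh@102). The ffdhe2048 round above was followed by ffdhe3072, and from here the same five keyids (0 through 4) run against ffdhe4096. Stripped of the trace noise, the driver reduces to roughly this sketch (the full script presumably sweeps the other digests the same way):

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # the groups visible so far
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do          # keyids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done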
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.791 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.052 nvme0n1 00:27:51.052 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.314 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 nvme0n1 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.576 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.577 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.577 11:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.577 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.837 nvme0n1 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.837 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.838 11:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.838 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.098 nvme0n1 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.098 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.359 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.360 11:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.620 nvme0n1 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.620 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 
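The cycle traced above repeats for every digest/dhgroup/keyid combination: nvmet_auth_set_key programs the kernel nvmet target with the key under test, then connect_authenticate proves that a host connection succeeds with DH-HMAC-CHAP enabled. The bare echo commands in the trace ('hmac(sha256)', the dhgroup name, and the two DHHC-1 secrets) are evidently redirected into the nvmet configfs attributes of the host entry (dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key under /sys/kernel/config/nvmet/hosts/<hostnqn>); xtrace does not capture the redirections themselves. The host side is driven entirely over SPDK JSON-RPC, with the get_main_ns_ip helper choosing the connect address by transport (NVMF_INITIATOR_IP for tcp, hence the echoed 10.0.0.1). Below is a minimal sketch of the same host-side sequence using SPDK's stock scripts/rpc.py client instead of the rpc_cmd test helper; it assumes a running SPDK application on the default RPC socket and that the named keys key1/ckey1 were registered beforehand (on recent SPDK, e.g. via keyring_file_add_key).

  # Restrict negotiation to the digest/dhgroup pair under test,
  # mirroring the bdev_nvme_set_options call in the trace.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Attach to the kernel nvmet subsystem: --dhchap-key authenticates the host,
  # --dhchap-ctrlr-key additionally requests bidirectional (controller) authentication.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the controller came up, then tear it down for the next iteration.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] test that follows every attach in the trace is exactly this verification step: a failed DH-HMAC-CHAP handshake leaves no controller named nvme0 to detach.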
00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.881 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.882 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.142 nvme0n1 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.142 11:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.142 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.403 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.404 11:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.664 nvme0n1 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.664 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.924 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.185 nvme0n1 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.185 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.186 11:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.758 nvme0n1 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.758 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.329 nvme0n1 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.329 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.590 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.591 11:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.163 nvme0n1 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:56.163 
11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:56.163 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.164 11:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.107 nvme0n1 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:57.107 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.107 
11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.108 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.679 nvme0n1 00:27:57.679 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.679 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.679 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.679 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.679 11:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.679 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.251 nvme0n1 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.252 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.513 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.514 nvme0n1 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:58.514 11:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.514 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.775 nvme0n1 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:58.775 11:08:18 
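Every secret in this trace uses the DHHC-1 representation from the NVMe DH-HMAC-CHAP secret format: DHHC-1:<t>:<base64>:, where <t> names the optional secret transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. The key just echoed can be inspected directly (the byte counts follow from the format, not from the log):

    # The payload of a DHHC-1 secret decodes to <secret> || <CRC-32>.
    key='DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c
    # prints 36: a 32-byte secret plus the 4-byte CRC-32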
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.775 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.776 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.776 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.776 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.776 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.036 nvme0n1 00:27:59.036 11:08:18 
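After each attach the nvme0n1 namespace appears and the test confirms the authenticated connect before tearing it down again, exactly as the @64/@65 entries that follow do. The verify-and-detach step, condensed (rpc_cmd stands in for scripts/rpc.py as in the harness):

    # Confirm the DH-HMAC-CHAP connect produced the expected controller,
    # then detach so the next digest/dhgroup/keyid round starts clean.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${name} == "nvme0" ]] || return 1   # authentication did not yield nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0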
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.036 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 nvme0n1 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.297 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.298 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.559 nvme0n1 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.559 11:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.821 nvme0n1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.821 
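connect_authenticate (host/auth.sh@55-61) is the host half of each round, as just traced for sha384/ffdhe3072 with keyid 0: restrict the initiator to the digest and DH group under test, then attach over TCP with the numbered key pair. Condensed to its two RPC calls:

    # Host side of one authentication round: pin the allowed DH-HMAC-CHAP
    # parameters, then connect to the target with the key under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0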
11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.821 11:08:19 
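The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment repeated at host/auth.sh@58 relies on bash's :+ alternate-value expansion: the array receives the two controller-key arguments only when a controller key is defined for that keyid, which is why the keyid-4 attaches in this trace carry no --dhchap-ctrlr-key flag. In isolation:

    # ${var:+word} expands to word only if var is set and non-empty, so the
    # optional RPC arguments appear only for keyids that define a ctrl key.
    declare -a ckeys=([0]='DHHC-1:03:...' [4]='')
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: ${ckey[@]:-<no controller key>}"
    done
    # keyid=0: --dhchap-ctrlr-key ckey0
    # keyid=4: <no controller key>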
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.821 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.082 nvme0n1 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.082 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.343 nvme0n1 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.343 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.604 nvme0n1 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.604 
11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.865 nvme0n1 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.865 
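get_main_ns_ip (nvmf/common.sh@769-783), traced in full before every attach, maps the transport to the name of an environment variable and then dereferences it; with tcp the candidate is NVMF_INITIATOR_IP, which resolves to the 10.0.0.1 echoed at @783. The same lookup with bash indirect expansion (the final ${!ip} step is implied by the echoed address rather than visible in the xtrace):

    # Pick the per-transport IP variable by name, then dereference it.
    NVMF_INITIATOR_IP=10.0.0.1
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[tcp]}   # -> the name NVMF_INITIATOR_IP
    echo "${!ip}"              # indirect expansion -> 10.0.0.1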
11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.865 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.127 nvme0n1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.127 11:08:20 
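Each connect_authenticate pass in this trace reduces to four RPCs against the SPDK application. The sketch below is reconstructed from the xtrace lines at host/auth.sh@60-65 and is a simplification, not the literal function body: rpc_cmd is the suite's wrapper around scripts/rpc.py, and the NQNs, address, and key names are the fixed values used throughout this run (key1/ckey1 were registered earlier in the log). The bare nvme0n1 tokens scattered through the log are apparently the bdev name printed by the attach call.

# Reconstructed host-side round for sha384/ffdhe4096/keyid=1 (sketch).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0                               # clean up for the next keyid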
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.127 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.390 nvme0n1 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.390 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.650 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.651 11:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.911 nvme0n1 00:28:01.911 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.911 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.911 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.911 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.911 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.912 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.173 nvme0n1 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.173 11:08:21 
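In the keyid=4 pass under way here, ckeys[4] is empty: the [[ -z '' ]] guard above short-circuits the controller-key step on the target, and the host attach that follows carries only --dhchap-key key4, with no --dhchap-ctrlr-key. This pass therefore exercises unidirectional authentication: the host proves possession of its secret, but does not require the controller to authenticate back. The flag pair vanishes thanks to bash's :+ alternate-value expansion inside an array, seen at host/auth.sh@58; a minimal sketch of the idiom:

# How the optional flag pair is built (sketch of the host/auth.sh@58 idiom).
declare -a ckeys
ckeys[4]=""                                   # keyid 4 has no controller key in this run
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]} extra args"                 # prints "0 extra args" here
# With a non-empty ckeys[keyid], "${ckey[@]}" would expand to the two flags.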
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.173 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.434 nvme0n1 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.434 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.696 11:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.957 nvme0n1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.958 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.530 nvme0n1 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.530 11:08:22 
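The get_main_ns_ip helper that runs before every attach is worth unpacking: the trace shows it filling an associative array that maps each transport to the name of the environment variable holding the right address, then dereferencing that name indirectly. A reconstruction from the nvmf/common.sh@769-783 lines follows; the guards match what the trace evaluates, but the early returns are inferred, not shown in this log.

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # guard at @775
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # NVMF_INITIATOR_IP for tcp (@776)
    [[ -z ${!ip} ]] && return 1                             # the named variable must be set (@778)
    echo "${!ip}"                                           # 10.0.0.1 in this run (@783)
}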
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.530 11:08:22 
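On the target side, nvmet_auth_set_key (host/auth.sh@42-51) shows only bare echo commands because set -x does not print redirections. Given what is echoed, hmac(sha384), the DH group name, and the two DHHC-1 secrets, the natural reading is that these land in the kernel nvmet host entry's configfs authentication attributes. The destination paths below are an assumption based on the usual nvmet configfs layout, not something this log shows, and $hostnqn stands in for whatever host entry the test created earlier:

# Assumed redirect targets for the echoes above (sketch; paths not in the log).
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest for DH-HMAC-CHAP
echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # group under test in this pass
echo "$key"         > "$host_dir/dhchap_key"       # host secret (DHHC-1:01:... here)
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # only when bidirectional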
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.530 11:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.102 nvme0n1 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.102 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.102 
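A note on the secrets themselves: a DHHC-1 string is three colon-delimited fields, DHHC-1:<t>:<base64>:, where, per the NVMe in-band authentication spec, <t> indicates how the secret was transformed (00 = plain, 01/02/03 = hashed with SHA-256/SHA-384/SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. Read that as an informed gloss rather than something this log asserts, but the arithmetic is easy to check against the keyid=3 key echoed a few lines up in this pass:

# Sanity check of the DHHC-1 payload length, using the key from this pass.
key='Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg=='
echo -n "$key" | base64 -d | wc -c   # 52 bytes: a 48-byte secret + 4-byte CRC-32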
11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.363 nvme0n1 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.363 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.624 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.624 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
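The xtrace_disable / '[[ 0 == 0 ]]' pairs that bracket every RPC in this log come from the rpc_cmd plumbing in common/autotest_common.sh: tracing is switched off while the RPC runs, and the saved exit status is then tested, which xtrace renders as the literal [[ 0 == 0 ]] on success. The real helper keeps a persistent rpc.py session; the stand-in below only mirrors the observable pattern and is not the actual function body.

# Simplified stand-in for rpc_cmd (the visible trace pattern, not the real code).
rpc_cmd() {
    xtrace_disable                      # silence tracing around the RPC internals
    "$rootdir/scripts/rpc.py" "$@"
    local status=$?
    xtrace_restore
    [[ $status == 0 ]]                  # shows up in the trace as '[[ 0 == 0 ]]'
}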
common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.625 11:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.886 nvme0n1 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.886 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.147 11:08:24 
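With ffdhe8192 beginning here, the overall shape of the test is plain: two nested loops at host/auth.sh@101-104 sweep every DH group against every key index, programming the target-side key and then round-tripping a host connection for each combination. A skeleton of that driver, with the arrays filled in from the values seen in this section (the keys array itself was registered earlier in the run):

# Skeleton of the sweep at host/auth.sh@101-104, reconstructed from the trace.
digest=sha384
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)        # groups covered in this section
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do               # keyids 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach
    done
done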
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:05.147 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.148 11:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.717 nvme0n1 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.717 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.718 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.288 nvme0n1
11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.288 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:06.288 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:06.288 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.288 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
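Two details in the verification block that just ran: the controller list comes back as JSON, so jq -r '.[].name' reduces it to bare names, and the right-hand side of [[ nvme0 == \n\v\m\e\0 ]] is simply how xtrace prints a quoted == pattern, with every character escaped so the comparison is literal rather than a glob. Condensed into two lines:

    # Literal (non-glob) match against the RPC result, then clean up.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] && rpc_cmd bdev_nvme_detach_controller nvme0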
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:
00:28:06.549 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql:
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]]
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql:
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.550 11:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.121 nvme0n1
11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==:
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR:
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==:
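A word on the DHHC-1:xx:...: strings cycling through this trace: they are NVMe-oF DH-HMAC-CHAP secrets in their standard textual form, where the two-digit field after DHHC-1 names the HMAC used to wrap the secret (00 = unwrapped, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the remainder is base64 key material plus a checksum. The five key/ckey pairs used by this test deliberately mix those variants. Outside the harness such secrets are usually produced with nvme-cli; a hedged example, not taken from this log:

    # Generates a SHA-384-wrapped DH-HMAC-CHAP secret (nvme-cli with auth support).
    nvme gen-dhchap-key --hmac 2 --nqn nqn.2024-02.io.spdk:cnode0
    # prints something of the form DHHC-1:02:<base64 secret>: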
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR:
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.121 11:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.061 nvme0n1
11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.061 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.061 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=:
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=:
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
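keyid 4 is the one key without a companion controller secret: the trace above shows ckey= assigned empty and the [[ -z '' ]] guard at host/auth.sh@51 skipping the controller-key write, and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 then drops the --dhchap-ctrlr-key pair entirely, so the attach that follows is issued with --dhchap-key key4 alone (one-way authentication). The bash behavior in isolation, with hypothetical values:

    # ${var:+word} yields word only if var is set and non-empty; otherwise nothing.
    ckeys=([0]="some-secret" [4]="")
    keyid=0; echo ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}   # --dhchap-ctrlr-key ckey0
    keyid=4; echo ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}   # (nothing)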
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.062 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.633 nvme0n1
11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.633 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.633 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.633 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.633 11:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4:
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=:
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.633 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4:
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]]
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=:
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.634 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.894 nvme0n1
11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==:
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==:
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==:
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]]
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==:
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
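The loop headers traced a little earlier (host/auth.sh@100-103) show the overall shape of the test: every digest is crossed with every DH group and every key index, and the target is re-keyed before each connect. Schematically, listing only the values actually visible in this excerpt (the real arrays may hold more members):

    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
    for digest in "${digests[@]}"; do            # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
            for keyid in "${!keys[@]}"; do       # host/auth.sh@102, keyids 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
            done
        done
    done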
"ckey${keyid}"}) 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.894 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.154 nvme0n1 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:09.154 
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql:
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]]
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql:
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.154 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.155 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.155 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:09.155 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.155 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.415 nvme0n1
11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==:
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR:
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==:
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR:
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.415 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.416 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.416 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
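get_main_ns_ip (nvmf/common.sh@769-783), traced yet again just above, resolves the initiator-side address for the transport under test: an associative array maps each transport to the environment variable that holds its address, the guards check that a transport is set and mapped, and the chosen variable is dereferenced. Approximately, with control flow condensed from the trace; the name of the transport variable is not visible in this excerpt and is assumed:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1            # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # -> NVMF_INITIATOR_IP here
        [[ -z ${!ip} ]] && return 1                     # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                   # indirect expansion -> 10.0.0.1
    }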
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=:
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=:
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.676 11:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.676 nvme0n1
11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:09.676 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4:
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=:
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4:
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=:
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.937 nvme0n1
11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
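One thing xtrace cannot show is where the nvmet_auth_set_key echoes at host/auth.sh@48-51 land. Since this harness authenticates against the kernel nvmet target, the natural destinations are the host's DH-HMAC-CHAP attributes in configfs; the paths below follow the standard kernel-nvmet layout and are an assumption about the script's redirections, not something quoted from this log:

    # Assumed redirect targets for the traced echoes (standard kernel nvmet layout;
    # the actual redirections are hidden because xtrace is toggled off around them).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # host/auth.sh@48
    echo ffdhe3072      > "$host/dhchap_dhgroup"   # host/auth.sh@49
    echo "$key"         > "$host/dhchap_key"       # host/auth.sh@50
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # host/auth.sh@51, only when non-empty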
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.937 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==:
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==:
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==:
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==:
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.198 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.198 nvme0n1
11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm:
00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql:
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.460 nvme0n1 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.460 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.721 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.721 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.721 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.721 11:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.721 11:08:30 
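
On the target side, nvmet_auth_set_key (host/auth.sh@42-51) is four echoes into the kernel nvmet configfs entry for the host: hash, DH group, host key and, when one exists, the controller key. A reconstruction rather than a copy of the script; the configfs attribute names are the upstream kernel's, but the exact path and the keys/ckeys arrays are assumptions based on the hostnqn and values visible in the trace:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"     > "$cfs/dhchap_hash"        # host/auth.sh@48
        echo "$dhgroup"          > "$cfs/dhchap_dhgroup"     # @49
        echo "${keys[$keyid]}"   > "$cfs/dhchap_key"         # @50
        [[ -n ${ckeys[$keyid]} ]] &&
            echo "${ckeys[$keyid]}" > "$cfs/dhchap_ctrl_key" # @51, only when a ckey exists
    }
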
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.721 nvme0n1 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.721 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.982 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.982 
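
get_main_ns_ip, traced here on every connect, is a small transport-to-address lookup: map each transport to the name of the environment variable that carries the usable IP, then dereference it. Reconstructed from nvmf/common.sh@769-783 as expanded in the trace; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        # Which env var holds the address depends on the transport in use.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # nvmf/common.sh@772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @773
        [[ -z $TEST_TRANSPORT ]] && return 1            # @775
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # @776: still a variable *name*...
        ip=${!ip}                                       # ...indirect expansion yields the value
        [[ -z $ip ]] && return 1                        # @778
        echo "$ip"                                      # @783
    }

Every attach in this file therefore lands on 10.0.0.1:4420, the initiator address the harness exported for TCP runs.
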
11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
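
keyid 4 is the one entry without a controller key: ckey expands empty, the [[ -z '' ]] at host/auth.sh@51 skips the target-side write, and the array assignment at @58 expands to nothing, which is why the attach above passes --dhchap-key key4 and no --dhchap-ctrlr-key at all. The ${var:+word} idiom doing that work, in standalone form:

    #!/usr/bin/env bash
    ckeys=([1]="ckey-secret" [4]="")    # keyid 4 deliberately has no controller key

    for keyid in "${!ckeys[@]}"; do
        # ${var:+word} yields word only if var is set AND non-empty,
        # so for keyid 4 the array stays empty and contributes no flags.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${#ckey[@]} extra args ${ckey[*]}"
    done
    # keyid=1: 2 extra args --dhchap-ctrlr-key ckey1
    # keyid=4: 0 extra args

Holding the expansion in an array rather than a quoted string is what keeps the empty case from injecting a stray empty argument into the rpc_cmd invocation.
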
00:28:10.983 nvme0n1 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.983 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.244 11:08:30 
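
Every secret in this log uses the DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>: from the NVMe specification, where <hh> encodes the hash the secret was generated for (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, following the nvme-cli gen-dhchap-key convention, the base64 payload is the raw key with a 4-byte CRC-32 appended; that CRC layout is an assumption here, not something the trace shows. A quick length check on the ffdhe4096 keyid-0 secret above:

    key='DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4:'
    b64=${key#DHHC-1:*:}            # strip the DHHC-1:<hh>: prefix (shortest match)
    b64=${b64%:}                    # and the trailing colon
    n=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "$n bytes decoded: $((n - 4)) of key material + 4 of CRC-32"   # 36 = 32 + 4

Note that the <hh> field describes the secret itself, not the negotiated digest: this run drives sha512 negotiation with 00-, 01-, 02- and 03-tagged secrets alike.
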
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.244 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.505 nvme0n1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.505 11:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.505 11:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.505 11:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.766 nvme0n1 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:11.766 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.767 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 nvme0n1 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.027 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.287 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.288 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.288 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.548 nvme0n1 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.548 11:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.809 nvme0n1 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.809 11:08:32 
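
With ffdhe6144 starting here, the shape of the driver at host/auth.sh@101-104 is fully visible in the trace: an outer walk over DH groups, an inner walk over key indices, one target-side set-key plus one host-side connect per combination. A runnable skeleton of that loop, with stubs standing in for the two real helpers:

    #!/usr/bin/env bash
    # Stubs for the helpers defined earlier in test/nvmf/host/auth.sh.
    nvmet_auth_set_key()   { echo "target: $1 $2 keyid=$3"; }
    connect_authenticate() { echo "host:   $1 $2 keyid=$3"; }

    keys=(k0 k1 k2 k3 k4)                     # five DHHC-1 secrets in the real run
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)  # the groups this part of the log reaches

    for dhgroup in "${dhgroups[@]}"; do       # host/auth.sh@101
        for keyid in "${!keys[@]}"; do        # @102
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # @103
            connect_authenticate sha512 "$dhgroup" "$keyid"   # @104
        done
    done

One cycle per (dhgroup, keyid) pair is exactly the rhythm of set-key/attach/detach blocks this excerpt keeps repeating.
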
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.809 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.380 nvme0n1 00:28:13.380 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.380 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.380 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.381 11:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.381 11:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.642 nvme0n1 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.642 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.903 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.164 nvme0n1 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.164 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.424 11:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.685 nvme0n1 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.685 11:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.685 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.949 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.949 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.949 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.950 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.211 nvme0n1 00:28:15.211 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZiZGVlZjM5MTYwZWQwZTk2Mzg3NjM1YTg2YzMzNDMJXze4: 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMxODNiMzlhMTYyYmQzNGU1NWY2N2FlNjdhMWEwMWYzZWUwOTYzNDJlYWYyOGVlYjQ3NjNkNmI0ZWViNzMyNwNpjzw=: 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.212 11:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.152 nvme0n1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.152 11:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 nvme0n1 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 11:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.721 11:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.721 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.291 nvme0n1 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.291 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg2MjA0ZWQ3NTVhYjEzNmI1MGNkYTk5YzAyNjUyMjMxZDI5NThlMjI2ZjBkODk1S02Bmg==: 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI5ZWI1YTc1ZjQwZTI1YjE1MzMyNWUyYjdjYjQwNjKMF5JR: 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.552 11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.552 
11:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.122 nvme0n1 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.122 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRjZDVlNzQ0MzA0OTg1NDllZGE1NTFlOTg2YmI1OGNjZjFmOTBhM2Y3NTNlMmNiZDAyYzIyOTQzMWRlOTE0Y7OOgBk=: 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.123 11:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.064 nvme0n1 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.064 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.064 request: 00:28:19.064 { 00:28:19.065 "name": "nvme0", 00:28:19.065 "trtype": "tcp", 00:28:19.065 "traddr": "10.0.0.1", 00:28:19.065 "adrfam": "ipv4", 00:28:19.065 "trsvcid": "4420", 00:28:19.065 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.065 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.065 "prchk_reftag": false, 00:28:19.065 "prchk_guard": false, 00:28:19.065 "hdgst": false, 00:28:19.065 "ddgst": false, 00:28:19.065 "allow_unrecognized_csi": false, 00:28:19.065 "method": "bdev_nvme_attach_controller", 00:28:19.065 "req_id": 1 00:28:19.065 } 00:28:19.065 Got JSON-RPC error response 00:28:19.065 response: 00:28:19.065 { 00:28:19.065 "code": -5, 00:28:19.065 "message": "Input/output error" 00:28:19.065 } 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.065 request: 00:28:19.065 { 00:28:19.065 "name": "nvme0", 00:28:19.065 "trtype": "tcp", 00:28:19.065 "traddr": "10.0.0.1", 00:28:19.065 "adrfam": "ipv4", 00:28:19.065 "trsvcid": "4420", 00:28:19.065 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.065 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.065 "prchk_reftag": false, 00:28:19.065 "prchk_guard": false, 00:28:19.065 "hdgst": false, 00:28:19.065 "ddgst": false, 00:28:19.065 "dhchap_key": "key2", 00:28:19.065 "allow_unrecognized_csi": false, 00:28:19.065 "method": "bdev_nvme_attach_controller", 00:28:19.065 "req_id": 1 00:28:19.065 } 00:28:19.065 Got JSON-RPC error response 00:28:19.065 response: 00:28:19.065 { 00:28:19.065 "code": -5, 00:28:19.065 "message": "Input/output error" 00:28:19.065 } 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.065 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.334 request: 00:28:19.335 { 00:28:19.335 "name": "nvme0", 00:28:19.335 "trtype": "tcp", 00:28:19.335 "traddr": "10.0.0.1", 00:28:19.335 "adrfam": "ipv4", 00:28:19.335 "trsvcid": "4420", 00:28:19.335 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.335 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.335 "prchk_reftag": false, 00:28:19.335 "prchk_guard": false, 00:28:19.335 "hdgst": false, 00:28:19.335 "ddgst": false, 00:28:19.335 "dhchap_key": "key1", 00:28:19.335 "dhchap_ctrlr_key": "ckey2", 00:28:19.335 "allow_unrecognized_csi": false, 00:28:19.335 "method": "bdev_nvme_attach_controller", 00:28:19.335 "req_id": 1 00:28:19.335 } 00:28:19.335 Got JSON-RPC error response 00:28:19.335 response: 00:28:19.335 { 00:28:19.335 "code": -5, 00:28:19.335 "message": "Input/output 
error" 00:28:19.335 } 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.335 nvme0n1 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:19.335 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.336 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.602 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.602 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.603 request: 00:28:19.603 { 00:28:19.603 "name": "nvme0", 00:28:19.603 "dhchap_key": "key1", 00:28:19.603 "dhchap_ctrlr_key": "ckey2", 00:28:19.603 "method": "bdev_nvme_set_keys", 00:28:19.603 "req_id": 1 00:28:19.603 } 00:28:19.603 Got JSON-RPC error response 00:28:19.603 response: 00:28:19.603 { 00:28:19.603 "code": -13, 00:28:19.603 "message": "Permission denied" 00:28:19.603 } 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:19.603 11:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:20.544 11:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY3ZmQxYmU0MGYyMmIxOThmYjMzNmQ5MTYxOTg0ZjNjMWU0Yjc4MTNiMzQyNTUwjgXmjw==: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmM0OWE3ZWNkYmM2ODVmMTFmMmUyNTE3NzBiZWQwZmY4MGQzOWZiNmE4MGU5OGMzipviLg==: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.929 nvme0n1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ4ZGNiYjRmNzlkYjdlMTJlMWY0ZWVjYTdkMDY1MmNULMOm: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDBjMTJkNjQxNWU5YWIxMzA3NGVkYTA3ZDIzOGI3ZDO5nIql: 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.929 request: 00:28:21.929 { 00:28:21.929 "name": "nvme0", 00:28:21.929 "dhchap_key": "key2", 00:28:21.929 "dhchap_ctrlr_key": "ckey1", 00:28:21.929 "method": "bdev_nvme_set_keys", 00:28:21.929 "req_id": 1 00:28:21.929 } 00:28:21.929 Got JSON-RPC error response 00:28:21.929 response: 00:28:21.929 { 00:28:21.929 "code": -13, 00:28:21.929 "message": "Permission denied" 00:28:21.929 } 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:21.929 11:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:23.006 11:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.006 rmmod nvme_tcp 00:28:23.006 rmmod nvme_fabrics 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 546183 ']' 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 546183 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 546183 ']' 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 546183 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.006 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 546183 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 546183' 00:28:23.312 killing process with pid 546183 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 546183 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 546183 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:23.312 11:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:25.223 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.482 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:25.482 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:25.482 11:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.785 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.785 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.785 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.785 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.785 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.045 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.045 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.045 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.046 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:29.618 11:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gUI /tmp/spdk.key-null.khW /tmp/spdk.key-sha256.OgL /tmp/spdk.key-sha384.bFd /tmp/spdk.key-sha512.Tqu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:29.618 11:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:32.920 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:32.920 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:32.920 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.920 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:33.182 00:28:33.182 real 1m0.852s 00:28:33.182 user 0m54.586s 00:28:33.182 sys 0m16.145s 00:28:33.182 11:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:33.182 11:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.182 ************************************ 00:28:33.182 END TEST nvmf_auth_host 00:28:33.182 ************************************ 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.444 ************************************ 00:28:33.444 START TEST nvmf_digest 00:28:33.444 ************************************ 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:33.444 * Looking for test storage... 
00:28:33.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:33.444 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.706 --rc genhtml_branch_coverage=1 00:28:33.706 --rc genhtml_function_coverage=1 00:28:33.706 --rc genhtml_legend=1 00:28:33.706 --rc geninfo_all_blocks=1 00:28:33.706 --rc geninfo_unexecuted_blocks=1 00:28:33.706 00:28:33.706 ' 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.706 --rc genhtml_branch_coverage=1 00:28:33.706 --rc genhtml_function_coverage=1 00:28:33.706 --rc genhtml_legend=1 00:28:33.706 --rc geninfo_all_blocks=1 00:28:33.706 --rc geninfo_unexecuted_blocks=1 00:28:33.706 00:28:33.706 ' 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.706 --rc genhtml_branch_coverage=1 00:28:33.706 --rc genhtml_function_coverage=1 00:28:33.706 --rc genhtml_legend=1 00:28:33.706 --rc geninfo_all_blocks=1 00:28:33.706 --rc geninfo_unexecuted_blocks=1 00:28:33.706 00:28:33.706 ' 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.706 --rc genhtml_branch_coverage=1 00:28:33.706 --rc genhtml_function_coverage=1 00:28:33.706 --rc genhtml_legend=1 00:28:33.706 --rc geninfo_all_blocks=1 00:28:33.706 --rc geninfo_unexecuted_blocks=1 00:28:33.706 00:28:33.706 ' 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.706 
11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.706 11:08:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.706 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.707 11:08:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.707 11:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.847 
11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:41.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:41.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.847 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:41.848 Found net devices under 0000:4b:00.0: cvl_0_0 
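The "Found net devices under ..." lines above come from nvmf/common.sh resolving each supported PCI NIC to its kernel net device through sysfs. A minimal standalone sketch of that lookup, using the two E810 port addresses from this log (substitute your own):

    # Each entry under /sys/bus/pci/devices/<addr>/net/ names the netdev
    # bound to that PCI function, exactly as nvmf/common.sh@411 does above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound, skip
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done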
00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:41.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:28:41.848 00:28:41.848 --- 10.0.0.2 ping statistics --- 00:28:41.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.848 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:28:41.848 00:28:41.848 --- 10.0.0.1 ping statistics --- 00:28:41.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.848 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.848 ************************************ 00:28:41.848 START TEST nvmf_digest_clean 00:28:41.848 ************************************ 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=563294 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 563294 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 563294 ']' 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.848 11:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.849 [2024-11-15 11:09:00.670715] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:28:41.849 [2024-11-15 11:09:00.670777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.849 [2024-11-15 11:09:00.770736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.849 [2024-11-15 11:09:00.821097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.849 [2024-11-15 11:09:00.821144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.849 [2024-11-15 11:09:00.821153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.849 [2024-11-15 11:09:00.821161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.849 [2024-11-15 11:09:00.821169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
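The nvmfappstart/waitforlisten trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and blocks until its RPC socket answers. A hedged sketch of that pattern; the poll loop is an illustrative reconstruction, not the literal autotest_common.sh body:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target answers
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    # the target was held in --wait-for-rpc state; release it
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init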
00:28:41.849 [2024-11-15 11:09:00.821942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.111 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.111 null0 00:28:42.111 [2024-11-15 11:09:01.637879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.372 [2024-11-15 11:09:01.662190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=563430 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 563430 /var/tmp/bperf.sock 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 563430 ']' 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.372 11:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.372 [2024-11-15 11:09:01.724034] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:28:42.372 [2024-11-15 11:09:01.724098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid563430 ] 00:28:42.372 [2024-11-15 11:09:01.815663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.372 [2024-11-15 11:09:01.868626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.316 11:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.887 nvme0n1 00:28:43.887 11:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:43.887 11:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.887 Running I/O for 2 seconds... 
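Condensed from the trace above, the randread/4096/qd128 pass boils down to four commands: run bdevperf as the NVMe/TCP initiator, finish its framework init, attach the target with data digest (--ddgst) enabled, and drive the workload over its private RPC socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (waitforlisten on /var/tmp/bperf.sock elided; same pattern as the target)
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests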
00:28:46.213 18864.00 IOPS, 73.69 MiB/s [2024-11-15T10:09:05.740Z] 19557.50 IOPS, 76.40 MiB/s 00:28:46.213 Latency(us) 00:28:46.213 [2024-11-15T10:09:05.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:46.213 nvme0n1 : 2.01 19571.31 76.45 0.00 0.00 6533.74 3153.92 23046.83 00:28:46.213 [2024-11-15T10:09:05.740Z] =================================================================================================================== 00:28:46.213 [2024-11-15T10:09:05.740Z] Total : 19571.31 76.45 0.00 0.00 6533.74 3153.92 23046.83 00:28:46.213 { 00:28:46.213 "results": [ 00:28:46.213 { 00:28:46.213 "job": "nvme0n1", 00:28:46.213 "core_mask": "0x2", 00:28:46.213 "workload": "randread", 00:28:46.213 "status": "finished", 00:28:46.213 "queue_depth": 128, 00:28:46.213 "io_size": 4096, 00:28:46.213 "runtime": 2.009676, 00:28:46.213 "iops": 19571.313982950487, 00:28:46.213 "mibps": 76.45044524590034, 00:28:46.213 "io_failed": 0, 00:28:46.213 "io_timeout": 0, 00:28:46.213 "avg_latency_us": 6533.743619783721, 00:28:46.213 "min_latency_us": 3153.92, 00:28:46.213 "max_latency_us": 23046.826666666668 00:28:46.213 } 00:28:46.213 ], 00:28:46.213 "core_count": 1 00:28:46.213 } 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:46.213 | select(.opcode=="crc32c") 00:28:46.213 | "\(.module_name) \(.executed)"' 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 563430 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 563430 ']' 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 563430 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 563430 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 563430' 00:28:46.213 killing process with pid 563430 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 563430 00:28:46.213 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.213 00:28:46.213 Latency(us) 00:28:46.213 [2024-11-15T10:09:05.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.213 [2024-11-15T10:09:05.740Z] =================================================================================================================== 00:28:46.213 [2024-11-15T10:09:05.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 563430 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=564302 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 564302 /var/tmp/bperf.sock 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 564302 ']' 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:46.213 11:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.473 [2024-11-15 11:09:05.740730] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:28:46.473 [2024-11-15 11:09:05.740787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564302 ] 00:28:46.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.473 Zero copy mechanism will not be used. 00:28:46.473 [2024-11-15 11:09:05.822573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.473 [2024-11-15 11:09:05.852175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.044 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:47.044 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:47.044 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:47.044 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.044 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.304 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.304 11:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.566 nvme0n1 00:28:47.566 11:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.566 11:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.827 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.827 Zero copy mechanism will not be used. 00:28:47.827 Running I/O for 2 seconds... 
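The banner above notes that 131072-byte I/Os exceed the 65536-byte zero-copy threshold, so the TCP transport copies payloads for this pass. The MiB/s columns in the result tables are derived directly from IOPS and I/O size; checking the 2-second total reported in the block below:

    # MiB/s = IOPS * io_size / 2^20; for the 128 KiB randread total that follows:
    awk 'BEGIN { printf "%.2f MiB/s\n", 3579.91 * 131072 / 1048576 }'    # -> 447.49, matching "mibps"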
00:28:49.713 3827.00 IOPS, 478.38 MiB/s [2024-11-15T10:09:09.240Z] 3579.50 IOPS, 447.44 MiB/s 00:28:49.713 Latency(us) 00:28:49.713 [2024-11-15T10:09:09.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.713 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:49.713 nvme0n1 : 2.00 3579.91 447.49 0.00 0.00 4466.27 778.24 13707.95 00:28:49.713 [2024-11-15T10:09:09.240Z] =================================================================================================================== 00:28:49.713 [2024-11-15T10:09:09.240Z] Total : 3579.91 447.49 0.00 0.00 4466.27 778.24 13707.95 00:28:49.713 { 00:28:49.713 "results": [ 00:28:49.713 { 00:28:49.713 "job": "nvme0n1", 00:28:49.713 "core_mask": "0x2", 00:28:49.713 "workload": "randread", 00:28:49.713 "status": "finished", 00:28:49.713 "queue_depth": 16, 00:28:49.713 "io_size": 131072, 00:28:49.713 "runtime": 2.004239, 00:28:49.713 "iops": 3579.912375719662, 00:28:49.713 "mibps": 447.48904696495777, 00:28:49.713 "io_failed": 0, 00:28:49.713 "io_timeout": 0, 00:28:49.713 "avg_latency_us": 4466.272215563298, 00:28:49.713 "min_latency_us": 778.24, 00:28:49.713 "max_latency_us": 13707.946666666667 00:28:49.713 } 00:28:49.713 ], 00:28:49.713 "core_count": 1 00:28:49.714 } 00:28:49.714 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.714 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:49.714 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.714 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.714 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.714 | select(.opcode=="crc32c") 00:28:49.714 | "\(.module_name) \(.executed)"' 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 564302 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 564302 ']' 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 564302 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 564302 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 564302' 00:28:49.975 killing process with pid 564302 00:28:49.975 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 564302 00:28:49.975 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.975 00:28:49.976 Latency(us) 00:28:49.976 [2024-11-15T10:09:09.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.976 [2024-11-15T10:09:09.503Z] =================================================================================================================== 00:28:49.976 [2024-11-15T10:09:09.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 564302 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=565258 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 565258 /var/tmp/bperf.sock 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 565258 ']' 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.976 11:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:50.236 [2024-11-15 11:09:09.522066] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:28:50.236 [2024-11-15 11:09:09.522120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565258 ] 00:28:50.236 [2024-11-15 11:09:09.606497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.236 [2024-11-15 11:09:09.635803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.809 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:50.809 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:50.809 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.809 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.809 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.070 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.070 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.332 nvme0n1 00:28:51.332 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.332 11:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.593 Running I/O for 2 seconds... 
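After every timed run the harness queries accel-framework statistics over the same socket and asserts that crc32c was actually executed, and by the software module (these passes run with scan_dsa=false). A minimal sketch of that check, reusing the exact jq filter from the trace:

    read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]] || echo "crc32c not exercised in software"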
00:28:53.479 30121.00 IOPS, 117.66 MiB/s [2024-11-15T10:09:13.006Z] 30306.00 IOPS, 118.38 MiB/s 00:28:53.479 Latency(us) 00:28:53.479 [2024-11-15T10:09:13.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.479 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.479 nvme0n1 : 2.00 30331.72 118.48 0.00 0.00 4215.58 2020.69 15728.64 00:28:53.479 [2024-11-15T10:09:13.006Z] =================================================================================================================== 00:28:53.479 [2024-11-15T10:09:13.006Z] Total : 30331.72 118.48 0.00 0.00 4215.58 2020.69 15728.64 00:28:53.479 { 00:28:53.479 "results": [ 00:28:53.479 { 00:28:53.479 "job": "nvme0n1", 00:28:53.479 "core_mask": "0x2", 00:28:53.479 "workload": "randwrite", 00:28:53.479 "status": "finished", 00:28:53.479 "queue_depth": 128, 00:28:53.479 "io_size": 4096, 00:28:53.479 "runtime": 2.002524, 00:28:53.479 "iops": 30331.721367634047, 00:28:53.479 "mibps": 118.4832865923205, 00:28:53.479 "io_failed": 0, 00:28:53.479 "io_timeout": 0, 00:28:53.479 "avg_latency_us": 4215.580182197344, 00:28:53.479 "min_latency_us": 2020.6933333333334, 00:28:53.479 "max_latency_us": 15728.64 00:28:53.479 } 00:28:53.479 ], 00:28:53.479 "core_count": 1 00:28:53.479 } 00:28:53.479 11:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.479 11:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:53.479 11:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.479 11:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.479 | select(.opcode=="crc32c") 00:28:53.479 | "\(.module_name) \(.executed)"' 00:28:53.479 11:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 565258 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 565258 ']' 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 565258 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 565258 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 565258' 00:28:53.741 killing process with pid 565258 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 565258 00:28:53.741 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.741 00:28:53.741 Latency(us) 00:28:53.741 [2024-11-15T10:09:13.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.741 [2024-11-15T10:09:13.268Z] =================================================================================================================== 00:28:53.741 [2024-11-15T10:09:13.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 565258 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=566228 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 566228 /var/tmp/bperf.sock 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 566228 ']' 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:53.741 11:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.002 [2024-11-15 11:09:13.306267] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:28:54.002 [2024-11-15 11:09:13.306321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566228 ] 00:28:54.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.002 Zero copy mechanism will not be used. 00:28:54.002 [2024-11-15 11:09:13.389592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.002 [2024-11-15 11:09:13.418976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.574 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.574 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:54.574 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.574 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.574 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.835 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.836 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.413 nvme0n1 00:28:55.413 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.413 11:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.413 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.413 Zero copy mechanism will not be used. 00:28:55.413 Running I/O for 2 seconds... 
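The teardown that has already run three times above follows the killprocess pattern from autotest_common.sh: confirm the pid is alive and still names the expected process, refuse to signal a sudo wrapper, then kill and wait so the final latency table can flush. A rough condensation of the steps the xtrace shows (the real helper has more branches, e.g. a uname check):

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                 # liveness check seen at @956 above
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_1 for these bdevperf instances
      [[ $name == sudo ]] && return 0            # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                 # wait lets the shutdown summary print
    }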
00:28:57.306 5603.00 IOPS, 700.38 MiB/s [2024-11-15T10:09:16.833Z] 5306.50 IOPS, 663.31 MiB/s 00:28:57.306 Latency(us) 00:28:57.306 [2024-11-15T10:09:16.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.306 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:57.306 nvme0n1 : 2.01 5300.59 662.57 0.00 0.00 3012.25 1249.28 6799.36 00:28:57.306 [2024-11-15T10:09:16.833Z] =================================================================================================================== 00:28:57.306 [2024-11-15T10:09:16.833Z] Total : 5300.59 662.57 0.00 0.00 3012.25 1249.28 6799.36 00:28:57.306 { 00:28:57.306 "results": [ 00:28:57.306 { 00:28:57.306 "job": "nvme0n1", 00:28:57.306 "core_mask": "0x2", 00:28:57.306 "workload": "randwrite", 00:28:57.306 "status": "finished", 00:28:57.306 "queue_depth": 16, 00:28:57.306 "io_size": 131072, 00:28:57.306 "runtime": 2.005813, 00:28:57.306 "iops": 5300.593824050397, 00:28:57.306 "mibps": 662.5742280062997, 00:28:57.306 "io_failed": 0, 00:28:57.306 "io_timeout": 0, 00:28:57.306 "avg_latency_us": 3012.251577627289, 00:28:57.306 "min_latency_us": 1249.28, 00:28:57.306 "max_latency_us": 6799.36 00:28:57.306 } 00:28:57.306 ], 00:28:57.306 "core_count": 1 00:28:57.306 } 00:28:57.306 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.306 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.306 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.306 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:57.306 | select(.opcode=="crc32c") 00:28:57.306 | "\(.module_name) \(.executed)"' 00:28:57.306 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 566228 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 566228 ']' 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 566228 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.567 11:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 566228 00:28:57.567 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:57.567 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
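Each results block is plain JSON emitted by bdevperf, so the headline numbers can be pulled out with the same jq already used elsewhere in this run. Against the 128 KiB randwrite block just above (saved to results.json; the file name is illustrative):

    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us @ qd \(.queue_depth)"' results.json
    # -> nvme0n1: 5300 IOPS, avg 3012 us @ qd 16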
00:28:57.567 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 566228' 00:28:57.567 killing process with pid 566228 00:28:57.567 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 566228 00:28:57.567 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.567 00:28:57.567 Latency(us) 00:28:57.567 [2024-11-15T10:09:17.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.567 [2024-11-15T10:09:17.094Z] =================================================================================================================== 00:28:57.567 [2024-11-15T10:09:17.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.567 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 566228 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 563294 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 563294 ']' 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 563294 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 563294 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 563294' 00:28:57.828 killing process with pid 563294 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 563294 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 563294 00:28:57.828 00:28:57.828 real 0m16.723s 00:28:57.828 user 0m33.019s 00:28:57.828 sys 0m3.753s 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:57.828 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.828 ************************************ 00:28:57.828 END TEST nvmf_digest_clean 00:28:57.828 ************************************ 00:28:58.089 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:58.090 ************************************ 00:28:58.090 START TEST nvmf_digest_error 00:28:58.090 ************************************ 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:28:58.090 
11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=566954 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 566954 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 566954 ']' 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:58.090 11:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.090 [2024-11-15 11:09:17.474802] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:28:58.090 [2024-11-15 11:09:17.474857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.090 [2024-11-15 11:09:17.567314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.090 [2024-11-15 11:09:17.599949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.090 [2024-11-15 11:09:17.599978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.090 [2024-11-15 11:09:17.599988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.090 [2024-11-15 11:09:17.599992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.090 [2024-11-15 11:09:17.599996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
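The error variant starts its own nvmf_tgt with --wait-for-rpc precisely so that crc32c can be re-routed to the accel error module before the framework initializes; the trace then shows a null bdev, the TCP transport, and a listener coming up. One plausible target-side RPC sequence (accel_assign_opc is verbatim from the trace; the remaining calls are standard SPDK RPCs with illustrative null-bdev sizing, and the log actually issues them inside the cvl_0_0_ns_spdk netns):

    RPC="$SPDK/scripts/rpc.py"                   # default target socket /var/tmp/spdk.sock
    $RPC accel_assign_opc -o crc32c -m error     # route crc32c through the error module
    $RPC framework_start_init
    $RPC bdev_null_create null0 100 4096         # size/block-size values are illustrative
    $RPC nvmf_create_transport -t tcp
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420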
00:28:58.090 [2024-11-15 11:09:17.600490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.033 [2024-11-15 11:09:18.314453] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.033 null0 00:28:59.033 [2024-11-15 11:09:18.393254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.033 [2024-11-15 11:09:18.417458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=567287 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 567287 /var/tmp/bperf.sock 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 567287 ']' 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
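The host side then repeats the clean-path flow with two extra steps, both visible in the xtrace that follows: injection is disabled while the controller attaches, then switched to corrupt mode so the target computes bad crc32c digests and the host's nvme_tcp layer reports the data digest errors seen further below. Condensed (bperf_rpc targets the bdevperf socket and rpc_cmd the nvmf target, as in digest.sh):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable         # let the attach handshake pass
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # flags as in the trace
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests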
00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:59.033 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:59.034 11:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.034 [2024-11-15 11:09:18.471813] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:28:59.034 [2024-11-15 11:09:18.471859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567287 ] 00:28:59.034 [2024-11-15 11:09:18.555029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.294 [2024-11-15 11:09:18.584723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.865 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:59.865 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:59.865 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.865 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.126 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.388 nvme0n1 00:29:00.388 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:00.388 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.388 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.388 
11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.388 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:00.388 11:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.388 Running I/O for 2 seconds... 00:29:00.388 [2024-11-15 11:09:19.862996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.863027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.863036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.871115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.871134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.871141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.879631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.879649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.879655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.888618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.888636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.888642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.898480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.898497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.898504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.907471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.907488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.907495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:00.388 [2024-11-15 11:09:19.915844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.388 [2024-11-15 11:09:19.915861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.388 [2024-11-15 11:09:19.915868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.925681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.925699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.925706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.934818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.934836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.942318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.942336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.942342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.952693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.952710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.952721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.961587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.961604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.961610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.970450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.970467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.970473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.979978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.979996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.980002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.987913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.987930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:19.997419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:19.997435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:19.997441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:20.006139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:20.006157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:20.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:20.015801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:20.015818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:20.015824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:20.025659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:20.025676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:20.025683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-15 11:09:20.037815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:00.650 [2024-11-15 11:09:20.037836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-15 11:09:20.037842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.650 [2024-11-15 11:09:20.047311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040)
00:29:00.650 [2024-11-15 11:09:20.047329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.650 [2024-11-15 11:09:20.047335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... similar data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) notice pairs repeat on tqpair=(0x84f040), qid:1, with varying cid and lba, from 2024-11-15 11:09:20.055 through 11:09:20.840 ...]
00:29:01.439 27412.00 IOPS, 107.08 MiB/s [2024-11-15T10:09:20.966Z]
[... similar notice pairs continue on tqpair=(0x84f040), qid:1, from 2024-11-15 11:09:20.849 through 11:09:21.359 ...]
00:29:01.963 [2024-11-15 11:09:21.370725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040)
00:29:01.963 [2024-11-15 11:09:21.370742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.963 [2024-11-15 11:09:21.370749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.380061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.380079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.380085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.389628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.389645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.389652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.398358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.398374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.398382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.406753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.406770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.406777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.415936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.415952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.415959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.424255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.424273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.424279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.433109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.433126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11781 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.433133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.442293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.442310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.442317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.963 [2024-11-15 11:09:21.450315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.963 [2024-11-15 11:09:21.450333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.963 [2024-11-15 11:09:21.450339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.964 [2024-11-15 11:09:21.460955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.964 [2024-11-15 11:09:21.460976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.964 [2024-11-15 11:09:21.460983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.964 [2024-11-15 11:09:21.472768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.964 [2024-11-15 11:09:21.472785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.964 [2024-11-15 11:09:21.472791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.964 [2024-11-15 11:09:21.480363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:01.964 [2024-11-15 11:09:21.480381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.964 [2024-11-15 11:09:21.480387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.490958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.490975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.490982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.501622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.501639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.501645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.510296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.510312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.510319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.518806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.518823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.518829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.528035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.528053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.528060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.537710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.537727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.537733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.545603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.545621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.545627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.557103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.557120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.557126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.565945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 
00:29:02.226 [2024-11-15 11:09:21.565964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.565970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.575486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.575504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.584338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.584355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.584362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.594427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.594444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.594450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.602061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.602079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.602085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.613325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.613342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.613348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.623580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.623597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.623607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.631757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.631774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.631780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.641703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.641720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.641726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.651470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.651487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.651493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.663119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.663136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.663142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.671576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.671593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.671599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.681462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.681479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.681486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.690658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.690674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.690680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.699065] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.699081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.699088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.707121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.707141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.226 [2024-11-15 11:09:21.707148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.226 [2024-11-15 11:09:21.716224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.226 [2024-11-15 11:09:21.716241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.227 [2024-11-15 11:09:21.716247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.227 [2024-11-15 11:09:21.725448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.227 [2024-11-15 11:09:21.725465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.227 [2024-11-15 11:09:21.725471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.227 [2024-11-15 11:09:21.735054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.227 [2024-11-15 11:09:21.735071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.227 [2024-11-15 11:09:21.735077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.227 [2024-11-15 11:09:21.744100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.227 [2024-11-15 11:09:21.744116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.227 [2024-11-15 11:09:21.744122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.753155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.753172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.753178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:02.488 [2024-11-15 11:09:21.762530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.762546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.762552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.770766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.770783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.770789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.782596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.782613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.782620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.791306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.791322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.791329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.799664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.799681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.799687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.808388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.808405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.808411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.816771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.816788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.816795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.826631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.826647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.826654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.835721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.835737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.835743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 [2024-11-15 11:09:21.844099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84f040) 00:29:02.488 [2024-11-15 11:09:21.844115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.488 [2024-11-15 11:09:21.844122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.488 27582.00 IOPS, 107.74 MiB/s 00:29:02.488 Latency(us) 00:29:02.488 [2024-11-15T10:09:22.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.488 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:02.488 nvme0n1 : 2.00 27592.77 107.78 0.00 0.00 4633.77 2280.11 16820.91 00:29:02.488 [2024-11-15T10:09:22.015Z] =================================================================================================================== 00:29:02.488 [2024-11-15T10:09:22.015Z] Total : 27592.77 107.78 0.00 0.00 4633.77 2280.11 16820.91 00:29:02.488 { 00:29:02.488 "results": [ 00:29:02.488 { 00:29:02.488 "job": "nvme0n1", 00:29:02.488 "core_mask": "0x2", 00:29:02.488 "workload": "randread", 00:29:02.488 "status": "finished", 00:29:02.488 "queue_depth": 128, 00:29:02.488 "io_size": 4096, 00:29:02.488 "runtime": 2.003858, 00:29:02.488 "iops": 27592.773539841644, 00:29:02.488 "mibps": 107.78427164000642, 00:29:02.488 "io_failed": 0, 00:29:02.488 "io_timeout": 0, 00:29:02.488 "avg_latency_us": 4633.768777641129, 00:29:02.488 "min_latency_us": 2280.1066666666666, 00:29:02.488 "max_latency_us": 16820.906666666666 00:29:02.488 } 00:29:02.488 ], 00:29:02.488 "core_count": 1 00:29:02.488 } 00:29:02.488 11:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:02.488 11:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:02.488 11:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:02.488 | .driver_specific 00:29:02.488 | .nvme_error 00:29:02.488 | .status_code 00:29:02.488 | .command_transient_transport_error' 00:29:02.488 11:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
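
The trace above is digest.sh's get_transient_errcount helper: it fetches per-bdev I/O statistics over the bdevperf RPC socket and extracts the NVMe transient transport error counter that the injected CRC-32C digest corruption is expected to bump. A minimal standalone sketch of the same query, using the script path, socket, and bdev name exactly as they appear in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Read per-bdev stats from the running bdevperf and pull out the transient
    # transport error counter (same jq path as the trace above, written inline).
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The assertion that follows passes only if this counter is non-zero, i.e. only if the corrupted data digests actually surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions like the ones printed above (216 of them in this run).
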
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 567287
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 567287 ']'
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 567287
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567287
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567287'
00:29:02.749 killing process with pid 567287
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 567287
00:29:02.749 Received shutdown signal, test time was about 2.000000 seconds
00:29:02.749
00:29:02.749 Latency(us)
00:29:02.749 [2024-11-15T10:09:22.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.749 [2024-11-15T10:09:22.276Z] ===================================================================================================================
00:29:02.749 [2024-11-15T10:09:22.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 567287
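
killprocess, traced above, is autotest_common.sh's guarded teardown: it refuses an empty pid, verifies the process still exists, checks the process name on Linux so it never signals a sudo wrapper, and only then kills and reaps the bdevperf instance. A condensed sketch of that flow (the helper name and structure here are simplified for illustration; the real helper lives in autotest_common.sh):

    kill_bperf() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # refuse an empty pid
        kill -0 "$pid" || return 1                     # process must still be alive
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1         # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it, as @976 does above
    }
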
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=567973
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 567973 /var/tmp/bperf.sock
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 567973 ']'
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:02.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:02.749 11:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.010 [2024-11-15 11:09:22.292303] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:29:03.010 [2024-11-15 11:09:22.292358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567973 ]
00:29:03.010 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:03.010 Zero copy mechanism will not be used.
00:29:03.010 [2024-11-15 11:09:22.376080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:03.010 [2024-11-15 11:09:22.404949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:03.582 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:03.582 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:03.582 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.582 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.843 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:04.413 nvme0n1
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
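
Everything the error-injection pass needs is now configured, in the order just traced: NVMe error counters are enabled and bdev-level retries made unlimited, CRC-32C error injection is held disabled while the controller attaches with data digest (--ddgst) turned on, and corruption is only switched on once the bdev exists. A condensed sketch of that RPC sequence, with socket, target address, and NQN as they appear in this log:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe errors (this is what fills driver_specific.nvme_error in the
    # iostat output) and retry failed I/O indefinitely, so injected digest errors
    # are retried instead of failing the job (io_failed stays 0 in the results above).
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled while injection is off...
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt CRC-32C results for the measured pass (arguments exactly as traced above).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
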
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:04.413 11:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:04.413 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:04.413 Zero copy mechanism will not be used.
00:29:04.413 Running I/O for 2 seconds...
00:29:04.413 [2024-11-15 11:09:23.792131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870)
00:29:04.413 [2024-11-15 11:09:23.792165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.413 [2024-11-15 11:09:23.792178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... several dozen further "data digest error on tqpair=(0xe31870)" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets elided, 2024-11-15 11:09:23.799493 through 11:09:24.159240; only the timestamps, cid, lba, and sqhd values differ ...]
00:29:04.677 [2024-11-15 11:09:24.168021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870)
00:29:04.677 [2024-11-15 11:09:24.168039] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.677 [2024-11-15 11:09:24.168045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.677 [2024-11-15 11:09:24.178062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.677 [2024-11-15 11:09:24.178080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.677 [2024-11-15 11:09:24.178086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.677 [2024-11-15 11:09:24.187524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.677 [2024-11-15 11:09:24.187543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.677 [2024-11-15 11:09:24.187549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.677 [2024-11-15 11:09:24.197391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.677 [2024-11-15 11:09:24.197410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.677 [2024-11-15 11:09:24.197416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.204232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.204251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.204257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.213663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.213685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.213691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.221602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.221620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.221626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.232513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 
00:29:04.938 [2024-11-15 11:09:24.232531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.232537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.242072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.242091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.242097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.252235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.252253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.252259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.259191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.938 [2024-11-15 11:09:24.259210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.938 [2024-11-15 11:09:24.259216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.938 [2024-11-15 11:09:24.266843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.266862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.266868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.272683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.272701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.272707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.277612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.277629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.277636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.285425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.285443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.285449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.292880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.292898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.292904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.302460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.302478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.302484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.307724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.307742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.307748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.313330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.313347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.313353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.321313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.321331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.321337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.330996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.331015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.331021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.340836] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.340855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.340861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.351982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.352000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.352010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.362971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.362989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.362995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.374731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.374750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.387001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.387019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.387026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.399295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.399314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.399320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.411600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.411618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.411624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:29:04.939 [2024-11-15 11:09:24.423849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.423868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.423874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.435385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.435404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.435410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.447553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.447577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.447583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.939 [2024-11-15 11:09:24.457660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:04.939 [2024-11-15 11:09:24.457679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.939 [2024-11-15 11:09:24.457685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.466435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.466454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.466461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.477715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.477733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.477739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.485109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.485127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.492746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.492764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.492770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.500419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.500438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.500445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.510270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.510289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.510295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.520950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.520969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.520975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.531325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.201 [2024-11-15 11:09:24.531343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.201 [2024-11-15 11:09:24.531353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.201 [2024-11-15 11:09:24.542063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.542082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.542088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.553922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.553940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.553947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.565541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.565560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.565571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.575291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.575309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.575315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.585751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.585770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.585776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.595751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.595770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.595776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.602809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.602826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.602832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.609382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.609407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.615462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.615483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.615489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.622620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.622645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.630956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.630974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.630981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.636497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.636515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.636522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.643962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.643981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.643988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.649809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.649828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.649834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.658374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.658392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.658399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.666580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.666599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 
[2024-11-15 11:09:24.666605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.671804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.671823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.671829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.676589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.676607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.676613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.679666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.679683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.679689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.685334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.685353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.685359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.692635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.692653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.692660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.701087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.701105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.701111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.711087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.711105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.711111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.721030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.202 [2024-11-15 11:09:24.721049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.202 [2024-11-15 11:09:24.721055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.202 [2024-11-15 11:09:24.726503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.203 [2024-11-15 11:09:24.726522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.203 [2024-11-15 11:09:24.726528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.732896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.732914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.732924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.742560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.742585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.742591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.751205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.751223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.751230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.759687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.759704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.759710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.767718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.767743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.774354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.774372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.774378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.782160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.782178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.782185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.464 3665.00 IOPS, 458.12 MiB/s [2024-11-15T10:09:24.991Z] [2024-11-15 11:09:24.791936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.791954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.791961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.801083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.801101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.801107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.808309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.816990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.817008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.817015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.828273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 
[2024-11-15 11:09:24.828292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.828298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.837201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.837220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.837226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.845477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.845497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.845505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.850608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.850633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.856529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.464 [2024-11-15 11:09:24.856547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.464 [2024-11-15 11:09:24.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.464 [2024-11-15 11:09:24.864389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.864407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.864414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.870508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.870528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.870534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.877067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.877085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.877092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.883842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.883861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.883867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.894233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.894252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.894258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.902410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.902429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.902435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.911739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.911757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.911763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.919471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.919490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.919496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.927241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870) 00:29:05.465 [2024-11-15 11:09:24.927260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.465 [2024-11-15 11:09:24.927266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.465 [2024-11-15 11:09:24.936825] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870)
00:29:05.465 [2024-11-15 11:09:24.936843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.465 [2024-11-15 11:09:24.936849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern (an nvme_tcp.c:1365 data digest error on tqpair=(0xe31870), the failed READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each injected digest error from 11:09:24.941 through 11:09:25.778, varying only in timestamp, cid, and lba ...]
00:29:06.514 [2024-11-15 11:09:25.785696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe31870)
00:29:06.514 [2024-11-15 11:09:25.785719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.514 [2024-11-15 11:09:25.785725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:06.514 3647.00 IOPS, 455.88 MiB/s
00:29:06.514                                                                                 Latency(us)
00:29:06.514 [2024-11-15T10:09:26.041Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:06.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:06.514 nvme0n1                     :       2.00    3648.96     456.12       0.00     0.00    4381.91     488.11   15510.19
00:29:06.514 [2024-11-15T10:09:26.041Z] ===================================================================================================================
00:29:06.514 [2024-11-15T10:09:26.041Z] Total                       :                 3648.96     456.12       0.00     0.00    4381.91     488.11   15510.19
00:29:06.514 {
00:29:06.514   "results": [
00:29:06.514     {
00:29:06.514       "job": "nvme0n1",
00:29:06.514       "core_mask": "0x2",
00:29:06.514       "workload": "randread",
00:29:06.514       "status": "finished",
00:29:06.514       "queue_depth": 16,
00:29:06.514       "io_size": 131072,
00:29:06.514       "runtime": 2.003582,
00:29:06.514       "iops": 3648.9647042147512,
00:29:06.514       "mibps": 456.1205880268439,
00:29:06.514       "io_failed": 0,
00:29:06.514       "io_timeout": 0,
00:29:06.514       "avg_latency_us": 4381.910436328819,
00:29:06.514       "min_latency_us": 488.1066666666667,
00:29:06.514       "max_latency_us": 15510.186666666666
00:29:06.514     }
00:29:06.514   ],
00:29:06.514   "core_count": 1
00:29:06.514 }
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
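The transient-error check traced at host/digest.sh@71 above amounts to the following shell sketch. The rpc.py path and the /var/tmp/bperf.sock socket are the ones used by this run; the function body is a simplified stand-in for the script's own helper, so treat it as illustrative rather than the exact source:

get_transient_errcount() {
    local bdev=$1
    # Because the controller was attached with --nvme-error-stat (see below),
    # bdev_get_iostat carries per-status-code NVMe error counters.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes only if the injected digest errors actually surfaced as
# transient transport errors; this run counted 236 of them.
(( $(get_transient_errcount nvme0n1) > 0 ))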
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 567973
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 567973 ']'
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 567973
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:06.514 11:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567973
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567973'
00:29:06.775 killing process with pid 567973
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 567973
00:29:06.775 Received shutdown signal, test time was about 2.000000 seconds
00:29:06.775
00:29:06.775                                                                                 Latency(us)
00:29:06.775 [2024-11-15T10:09:26.302Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:06.775 [2024-11-15T10:09:26.302Z] ===================================================================================================================
00:29:06.775 [2024-11-15T10:09:26.302Z] Total                       :                    0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 567973
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=568672
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 568672 /var/tmp/bperf.sock
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 568672 ']'
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:06.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
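The harness setup that run_bperf_err traces above boils down to something like the sketch below. The bdevperf path, socket, and workload flags come straight from this log; the polling loop is only a stand-in for SPDK's full waitforlisten helper, which does more bookkeeping than shown here:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -z keeps bdevperf idle until perform_tests arrives over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# waitforlisten, simplified: poll the RPC socket until the app answers
# (max_retries=100, matching the trace above).
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done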
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:06.775 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.037 [2024-11-15 11:09:26.210414] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:29:07.037 [2024-11-15 11:09:26.210469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568672 ]
00:29:07.037 [2024-11-15 11:09:26.292695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:07.037 [2024-11-15 11:09:26.322045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.608 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:07.608 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:07.608 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.608 11:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.869 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:08.132 nvme0n1
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:08.132 11:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:08.132 Running I/O for 2 seconds...
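Condensed, the RPC sequence just traced (host/digest.sh@61 through @69) configures NVMe error counting, attaches the controller with the TCP data digest enabled, re-arms the accel crc32c corruption, and kicks off the 2-second run. The sketch below simply replays the same calls with rpc.py; the flag meanings in the comments are inferred from this trace rather than quoted from SPDK documentation:

rpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock "$@"
}

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors; retry failed I/O indefinitely
rpc accel_error_inject_error -o crc32c -t disable                   # no corruption while attaching the controller
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables the NVMe/TCP data digest (CRC32C)
rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # re-arm corruption of crc32c operations (-i 256 as traced)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                            # start the 2-second randwrite run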
00:29:08.132 [2024-11-15 11:09:27.567695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60
[2024-11-15 11:09:27.568054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-15 11:09:27.568079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (a tcp.c:2233 data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60, the failed WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each injected digest error from 11:09:27.576 through 11:09:27.795, varying only in timestamp, cid, and lba ...]
00:29:08.395 [2024-11-15 11:09:27.804045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60
[2024-11-15 11:09:27.804327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-15 11:09:27.804343] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.812816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.813021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.813035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.821516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.821793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.830263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.830522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.830538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.839030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.839268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.847737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.847986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.848002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.395 [2024-11-15 11:09:27.856443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.395 [2024-11-15 11:09:27.856724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.395 [2024-11-15 11:09:27.856740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.865219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.865444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 
11:09:27.865460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.873951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.874214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.874230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.882663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.882919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.882934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.891360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.891661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.891678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.900056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.900344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.900361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.908900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.909182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.909198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.396 [2024-11-15 11:09:27.917670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.396 [2024-11-15 11:09:27.917965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.396 [2024-11-15 11:09:27.917981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.926431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.926705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:08.658 [2024-11-15 11:09:27.926721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.935184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.935421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.943937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.944181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.944196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.952628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.952883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.952898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.961362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.961611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.961626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.970080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.970339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.970354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.978785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.979049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.979065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.987587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.987852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11526 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.987868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:27.996315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:27.996587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:27.996603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.005114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.005330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.005345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.013859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.014132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.014148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.022645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.022934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.022951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.031320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.031601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.031616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.040042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.040300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.040321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.048805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.049069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:8978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.049085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.057591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.057793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.057808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.066330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.066579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.066597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.075027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.075249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.658 [2024-11-15 11:09:28.075264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.658 [2024-11-15 11:09:28.083772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.658 [2024-11-15 11:09:28.083891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.083906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.092554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.092794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.092809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.101277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.101535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.101550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.109978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.110218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.110233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.118767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.119021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.119044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.127493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.127759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.127775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.136196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.136467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.136483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.144942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.145202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.145216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.153727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.154022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.154038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.162427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.162685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.162700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.171180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 
11:09:28.171448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.171464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.659 [2024-11-15 11:09:28.180049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.659 [2024-11-15 11:09:28.180338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.659 [2024-11-15 11:09:28.180355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.188825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.189096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.189112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.197555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.197696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.197711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.206322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.206565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.206581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.215056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.215355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.215372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.223826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.224129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.224145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.232624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with 
pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.232870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.232885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.241414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.241677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.250136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.250410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.250426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.258894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.259189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.259205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.267656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.267904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.267920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.276385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.276648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.276664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.285103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.285360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.285376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.293842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.294082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.920 [2024-11-15 11:09:28.294101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.920 [2024-11-15 11:09:28.302590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.920 [2024-11-15 11:09:28.302848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.302863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.311342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.311632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.311648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.320135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.320383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.320399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.328861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.329124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.337546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.337695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.337710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.346292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.346556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.346575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.355002] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.355272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.355296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.363746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.364011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.364027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.372500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.372739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.372755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.381261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.381521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.381536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.390020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.390303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.390319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.398711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.399006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.399022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.407436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.407694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.407710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.416175] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.416313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.416328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.424903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.425191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.425206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.433710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.433827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.433842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.921 [2024-11-15 11:09:28.442424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:08.921 [2024-11-15 11:09:28.442680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.921 [2024-11-15 11:09:28.442695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.451149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.451461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.451477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.459858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.460121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.460136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.468592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.468890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.468906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 
[2024-11-15 11:09:28.477307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.477570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.477585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.486070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.486354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.486369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.494766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.495059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.495075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.503599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.503852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.503873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.512330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.182 [2024-11-15 11:09:28.512606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.182 [2024-11-15 11:09:28.512621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.182 [2024-11-15 11:09:28.521054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.183 [2024-11-15 11:09:28.521272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.183 [2024-11-15 11:09:28.521290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.183 [2024-11-15 11:09:28.529795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.183 [2024-11-15 11:09:28.530045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.183 [2024-11-15 11:09:28.530060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 
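The whole stretch above is one repeating mechanism: the NVMe/TCP connection was set up with the data digest (DDGST) enabled, so the receiver recomputes CRC-32C over every data PDU payload and compares it with the 4-byte digest that trails the payload. Each WRITE in this run fails that comparison, tcp.c logs "Data digest error", and the command is completed with a transient transport error (dnr:0, i.e. retryable). As an illustrative sketch only -- SPDK's real path uses table-driven or hardware-accelerated CRC helpers such as spdk_crc32c_update(), not this bitwise loop -- the digest itself is plain CRC-32C (Castagnoli polynomial):

/* crc32c_sketch.c - bitwise CRC-32C as used for the NVMe/TCP DDGST.
 * Illustrative only; names and structure are not taken from SPDK. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;               /* CRC-32C seed */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)     /* reflected poly 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                 /* final inversion */
}

int main(void)
{
    /* Standard CRC-32C check value: "123456789" -> e3069283 */
    printf("%08x\n", crc32c("123456789", 9));
    return 0;
}

A receiver that computes crc32c(payload, payload_len) and gets a value different from the received DDGST field has exactly the condition data_crc32_calc_done reports here; a digest mismatch on every queued WRITE, cid after cid, is consistent with deliberate digest-error injection by the test rather than random link corruption.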
00:29:09.183 [2024-11-15 11:09:28.538500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60
00:29:09.183 [2024-11-15 11:09:28.538824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.183 [2024-11-15 11:09:28.538839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... two further records of the same pattern (cid:123 lba:16854, cid:124 lba:9612), 11:09:28.547291 through 11:09:28.557543 ...]
00:29:09.183 29039.00 IOPS, 113.43 MiB/s [2024-11-15T10:09:28.710Z]
00:29:09.183 [2024-11-15 11:09:28.564821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60
00:29:09.183 [2024-11-15 11:09:28.565075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.183 [2024-11-15 11:09:28.565091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~30 further records of the same pattern, 11:09:28.573557 through 11:09:28.826813, cid now cycling 9-14, all completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:09.446 [2024-11-15 11:09:28.835531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60
00:29:09.446 [2024-11-15 11:09:28.835767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.446 [2024-11-15
11:09:28.835782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.844227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.844340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.844355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.852946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.853204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.853219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.861638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.861956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.861972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.870329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.870638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.870654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.879050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.879323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.879339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.887793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.888059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.888075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.896506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 
[2024-11-15 11:09:28.896823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.905248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.905484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.905499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.914018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.914263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.914286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.922712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.922981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.923002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.931466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.931747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.931763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.940245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.940499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.940514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.948984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.949237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.949252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.957674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.957955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:09.446 [2024-11-15 11:09:28.957971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.446 [2024-11-15 11:09:28.966370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.446 [2024-11-15 11:09:28.966593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.446 [2024-11-15 11:09:28.966609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:28.975084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:28.975319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:28.975337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:28.983802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:28.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:28.984097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:28.992590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:28.992856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:28.992872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.001325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.001580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.001594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.009986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.010271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.010287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.018673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.018966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:09.709 [2024-11-15 11:09:29.018982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.027408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.027676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.027692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.036169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.036434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.036449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.044903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.045163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.045180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.053642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.053892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.053906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.062348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.062612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.062627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.071110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.071377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.071394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.079857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.080136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20904 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.080152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.088617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.088879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.088894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.097355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.097616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.106148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.106398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.106413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.114815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.115044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.115059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.123590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.123854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.123869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.132325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.132542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.141066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.141266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20852 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.141282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.149749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.149909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.149924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.158448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.158707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.158722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.167196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.167447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.167462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.176104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.176392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.176409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.184802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.185047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.709 [2024-11-15 11:09:29.193632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.709 [2024-11-15 11:09:29.193881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.709 [2024-11-15 11:09:29.193905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.710 [2024-11-15 11:09:29.202322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.710 [2024-11-15 11:09:29.202586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18496 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.710 [2024-11-15 11:09:29.202601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.710 [2024-11-15 11:09:29.211093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.710 [2024-11-15 11:09:29.211343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.710 [2024-11-15 11:09:29.211358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.710 [2024-11-15 11:09:29.219783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.710 [2024-11-15 11:09:29.220050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.710 [2024-11-15 11:09:29.220066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.710 [2024-11-15 11:09:29.228566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.710 [2024-11-15 11:09:29.228805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.710 [2024-11-15 11:09:29.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.237295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.237533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.237549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.246002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.246224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.246239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.254700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.254939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.254954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.263448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.263722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.263738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.272177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.272429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.272445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.280933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.281192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.281211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.289646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.289911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.289933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.298441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.298679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.298694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.307150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.307426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.307442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.315906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.316151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.316167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.324685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.324910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:10890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.324925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.333421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.333692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.333708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.342147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.342406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.342421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.350876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.351149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.351165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.359630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.359891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.359907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.368440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.368700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.368716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.377120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.377376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.377392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.385849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.385998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.394566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.394684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.394699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.403292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.403542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.403557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.411998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.412252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.412267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.420756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.971 [2024-11-15 11:09:29.421021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.971 [2024-11-15 11:09:29.421037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.971 [2024-11-15 11:09:29.429459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.429708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.429723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.438211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.438453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.438474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.446951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.447214] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.447230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.455676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.455983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.455998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.464391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.464657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.464672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.473079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.473343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.473358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.481805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.972 [2024-11-15 11:09:29.490621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:09.972 [2024-11-15 11:09:29.490900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.972 [2024-11-15 11:09:29.490915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.499371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.499579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.508063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.508312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.508330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.516742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.517052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.517068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.525514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.525850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.534298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.534591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.534606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.543042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.543290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.543306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 [2024-11-15 11:09:29.551769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.552069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.552085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 29150.00 IOPS, 113.87 MiB/s [2024-11-15T10:09:29.760Z] [2024-11-15 11:09:29.560472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f520) with pdu=0x200016ef3e60 00:29:10.233 [2024-11-15 11:09:29.560777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.233 [2024-11-15 11:09:29.560792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.233 00:29:10.233 Latency(us) 00:29:10.233 [2024-11-15T10:09:29.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
00:29:10.233 
00:29:10.233 Latency(us)
00:29:10.233 [2024-11-15T10:09:29.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.234 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:10.234 nvme0n1 : 2.01 29154.09 113.88 0.00 0.00 4383.17 2471.25 12506.45
00:29:10.234 [2024-11-15T10:09:29.761Z] ===================================================================================================================
00:29:10.234 [2024-11-15T10:09:29.761Z] Total : 29154.09 113.88 0.00 0.00 4383.17 2471.25 12506.45
00:29:10.234 {
00:29:10.234   "results": [
00:29:10.234     {
00:29:10.234       "job": "nvme0n1",
00:29:10.234       "core_mask": "0x2",
00:29:10.234       "workload": "randwrite",
00:29:10.234       "status": "finished",
00:29:10.234       "queue_depth": 128,
00:29:10.234       "io_size": 4096,
00:29:10.234       "runtime": 2.005482,
00:29:10.234       "iops": 29154.088643029456,
00:29:10.234       "mibps": 113.88315876183381,
00:29:10.234       "io_failed": 0,
00:29:10.234       "io_timeout": 0,
00:29:10.234       "avg_latency_us": 4383.172324462384,
00:29:10.234       "min_latency_us": 2471.2533333333336,
00:29:10.234       "max_latency_us": 12506.453333333333
00:29:10.234     }
00:29:10.234   ],
00:29:10.234   "core_count": 1
00:29:10.234 }
00:29:10.234 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:10.234 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:10.234 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:29:10.234 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 229 > 0 ))
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 568672
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 568672 ']'
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 568672
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 568672
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:10.494 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 568672'
00:29:10.495 killing process with pid 568672
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 568672
00:29:10.495 Received shutdown signal, test time was about 2.000000 seconds
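The trace above shows how digest.sh turns the injected digest failures into a pass/fail check: it reads the bdev's NVMe error statistics over the bdevperf RPC socket and extracts the transient-transport-error counter with jq. A minimal standalone sketch of that query, assuming a bdevperf instance listening on /var/tmp/bperf.sock and error statistics enabled earlier via bdev_nvme_set_options --nvme-error-stat (the wrapper function here mirrors the traced calls but is illustrative, not the exact digest.sh source):

#!/usr/bin/env bash
# Count commands that completed with TRANSIENT TRANSPORT ERROR on a bdev.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_transient_errcount() {
    local bdev=$1
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The test asserts that at least one injected digest error surfaced:
(( errcount > 0 ))

In this run the counter came back as 229, so the (( 229 > 0 )) assertion in the trace passed and the first bperf instance was torn down.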
00:29:10.495 
00:29:10.495 Latency(us)
00:29:10.495 [2024-11-15T10:09:30.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.495 [2024-11-15T10:09:30.022Z] ===================================================================================================================
00:29:10.495 [2024-11-15T10:09:30.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 568672
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=569477
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 569477 /var/tmp/bperf.sock
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 569477 ']'
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:10.495 11:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:10.754 [2024-11-15 11:09:30.003880] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:29:10.754 [2024-11-15 11:09:30.003935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569477 ]
00:29:10.754 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:10.754 Zero copy mechanism will not be used.
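Before the 128 KiB error run can be configured, the harness has to wait for the freshly forked bdevperf to open its RPC socket; the -z flag keeps the workload idle until a perform_tests RPC arrives later. A simplified sketch of that start-and-wait shape (the real waitforlisten lives in common/autotest_common.sh and does more bookkeeping; the polling loop and the use of rpc_get_methods as a cheap liveness probe are assumptions here, not the harness's exact code):

#!/usr/bin/env bash
# Launch bdevperf on a private RPC socket and block until it answers.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll the socket up to max_retries times; the real helper also checks
# that $bperfpid is still alive between attempts.
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done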
00:29:10.754 [2024-11-15 11:09:30.102778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:10.754 [2024-11-15 11:09:30.134355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:11.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:11.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:11.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.842 nvme0n1
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:11.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:11.842 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:11.842 Zero copy mechanism will not be used.
00:29:11.842 Running I/O for 2 seconds...
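The four RPCs traced above are the whole error-injection recipe: crc32c corruption is switched off while the controller attaches with data digest enabled (--ddgst), then switched to corrupt mode before perform_tests releases the queued workload, so subsequent WRITEs fail digest verification. Condensed into a sketch (commands and arguments are copied from the trace; the assumption here is that rpc_cmd resolves to the nvmf target app on its default RPC socket, while bperf_rpc targets /var/tmp/bperf.sock):

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. No corruption during connect, so the attach handshake succeeds.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# 2. Attach over TCP with data digest (DDGST) enabled.
"$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Start corrupting crc32c results (-t corrupt -i 32), so data digests
#    computed from here on no longer match the payloads.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# 4. Release the workload that bdevperf's -z flag has been holding back.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest shows up below as a data_crc32_calc_done error followed by a TRANSIENT TRANSPORT ERROR completion with dnr:0, i.e. eligible for retry.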
00:29:11.842 [2024-11-15 11:09:31.295923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.296159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.296183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.302778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.302839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.302858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.306611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.306661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.306678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.313141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.313188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.313204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.317796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.317849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.317864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.322599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.322882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.322899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.330627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.330704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.330720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.335174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.335244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.335260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.339554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.339619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.339635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.346340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.842 [2024-11-15 11:09:31.346401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.842 [2024-11-15 11:09:31.346417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.842 [2024-11-15 11:09:31.355282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.843 [2024-11-15 11:09:31.355344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.843 [2024-11-15 11:09:31.355360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.843 [2024-11-15 11:09:31.362107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.843 [2024-11-15 11:09:31.362334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.843 [2024-11-15 11:09:31.362349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.843 [2024-11-15 11:09:31.369318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:11.843 [2024-11-15 11:09:31.369366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.843 [2024-11-15 11:09:31.369382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.376679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.376758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.376773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.381178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.381250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.381265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.385631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.385696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.385712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.391234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.391309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.391326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.395521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.395747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.395763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.401134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.401213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.401232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.405160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.405233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.405249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.409595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.409643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.409659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.420120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.420180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.420196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.431068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.431316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.431332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.441445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.441697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.103 [2024-11-15 11:09:31.441713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.103 [2024-11-15 11:09:31.451129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.103 [2024-11-15 11:09:31.451405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.451421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.462410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.462643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.462659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.472739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.472994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.473009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.483196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.483486] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.494018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.494235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.494251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.505422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.505666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.505682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.514266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.514388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.514404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.517797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.517975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.517991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.521420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.521614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.521630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.525288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.525484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.525500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.533370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.533613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.533629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.542336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.542606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.542629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.552669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.552850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.562533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.562837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.562854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.572759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.573004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.573020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.582884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.583092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.583108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.593291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.593572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.593588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.604066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.604229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 
11:09:31.604244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.614680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.615048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.615064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.104 [2024-11-15 11:09:31.625610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.104 [2024-11-15 11:09:31.625828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-11-15 11:09:31.625843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.635832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.636099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.636119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.646066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.646348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.646364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.656600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.656925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.664416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.664719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.664735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.672642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.672959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:12.366 [2024-11-15 11:09:31.672975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.677412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.677595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.677611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.680289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.680458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.680474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.683010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.683134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.683149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.685657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.685785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.685800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.688242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.688395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.688411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.690827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.690975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.690991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.693721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.693875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.693891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.696257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.696408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.696423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.698780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.698918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.698933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.701266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.701412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.701427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.704056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.704236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.704251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.707328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.707483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.707498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.366 [2024-11-15 11:09:31.709850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.366 [2024-11-15 11:09:31.710006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.366 [2024-11-15 11:09:31.710021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.712355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.712497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.712512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.714834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.714980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.714995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.717295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.717439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.717455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.719804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.719956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.719971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.722300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.722450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.722465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.724769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.724904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.727590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.727768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.727784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.731546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.731794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.731809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.741912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.742147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.742166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.752025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.752281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.761825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.762051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.762066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.768166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.768361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.768377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.776475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.776727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.776743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.787052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.787195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.787211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.790837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.791042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.791057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.795316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.795437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.795453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.798264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.798401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.798416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.800948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.801079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.801095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.803574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.803740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.806142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.806280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.806295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.808704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.808833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.808849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.811653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.811781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.811797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.814674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.814827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.817145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.817294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.817309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.819642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.819801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.819817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.822136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.367 [2024-11-15 11:09:31.822280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.367 [2024-11-15 11:09:31.822296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.367 [2024-11-15 11:09:31.824628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.824776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.824791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.827628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.827799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.827814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.832745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 
11:09:31.832979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.832994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.840211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.840367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.840383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.843530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.843700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.843716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.846928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.847106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.847121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.850351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.850468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.850484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.853667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.853842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.853858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.856455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.856604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.859387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 
00:29:12.368 [2024-11-15 11:09:31.859491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.859507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.864526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.864649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.864664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.868919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.869047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.869063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.872097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.872228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.872243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.874706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.874838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.874853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.877286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.877416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.877431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.879844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.879969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.879985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.882428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with 
pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.882567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.882582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.886438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.886590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.368 [2024-11-15 11:09:31.891315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.368 [2024-11-15 11:09:31.891450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-11-15 11:09:31.891466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.630 [2024-11-15 11:09:31.895865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.630 [2024-11-15 11:09:31.895987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.630 [2024-11-15 11:09:31.896002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.630 [2024-11-15 11:09:31.898609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.630 [2024-11-15 11:09:31.898729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.630 [2024-11-15 11:09:31.898745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.630 [2024-11-15 11:09:31.901324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.630 [2024-11-15 11:09:31.901454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.630 [2024-11-15 11:09:31.901469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.630 [2024-11-15 11:09:31.904015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:12.630 [2024-11-15 11:09:31.904127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.630 [2024-11-15 11:09:31.904143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.630 [2024-11-15 11:09:31.906577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:12.630 [2024-11-15 11:09:31.906695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.630 [2024-11-15 11:09:31.906710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.630 [2024-11-15 11:09:31.909170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:12.630 [2024-11-15 11:09:31.909291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.630 [2024-11-15 11:09:31.909307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern repeats for each outstanding WRITE on tqpair=(0xf2f860): a tcp.c:2233:data_crc32_calc_done *ERROR*, the failed WRITE (len:32, lba and cid varying), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0, from 11:09:31.911 through 11:09:32.284 ...]
00:29:12.896 6193.00 IOPS, 774.12 MiB/s [2024-11-15T10:09:32.423Z]
[... the digest-error/WRITE/completion pattern continues, lba varying, from 11:09:32.293 through 11:09:32.536 ...]
00:29:13.160 [2024-11-15 11:09:32.541419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.160 [2024-11-15 11:09:32.541474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.160 [2024-11-15 11:09:32.541490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.160
[2024-11-15 11:09:32.545535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.545580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.545596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.549913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.549968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.549983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.554636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.554680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.554695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.558496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.558546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.558565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.562123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.562191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.562206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.568111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.568175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.568190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.160 [2024-11-15 11:09:32.575264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.160 [2024-11-15 11:09:32.575325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-11-15 11:09:32.575340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:13.161 [2024-11-15 11:09:32.584696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.585006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.585022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.595207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.595504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.595521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.604520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.604590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.604605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.609095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.609156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.611801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.611864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.611883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.614487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.614537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.614552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.617134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.617194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.617210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.619790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.619842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.619857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.622540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.622591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.622607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.625270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.625326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.625341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.627753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.627808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.627823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.630705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.630749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.630764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.633232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.633291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.633306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.635695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.635749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.635765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.638411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.638456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.638471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.643643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.643692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.643707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.647542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.647781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.647796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.655732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.655776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.655791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.660059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.660100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.660115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.663125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.663179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.663194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.666320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.666383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.666398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.671394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.671629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.671644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.675745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.675797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-11-15 11:09:32.675812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.161 [2024-11-15 11:09:32.679141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.161 [2024-11-15 11:09:32.679184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.162 [2024-11-15 11:09:32.679199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.162 [2024-11-15 11:09:32.682554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.162 [2024-11-15 11:09:32.682603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.162 [2024-11-15 11:09:32.682619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.686059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.686109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.686124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.689895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.689946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.689961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.696693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.696740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.696756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.700352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.700425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.704233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.704336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.712217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.712261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.712279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.716494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.716545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.716560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.720636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.720694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.720709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.724772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.724816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-11-15 11:09:32.724831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.423 [2024-11-15 11:09:32.728756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.423 [2024-11-15 11:09:32.728816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 
11:09:32.728831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.732319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.732414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.732429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.736096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.736162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.736177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.739817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.739859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.739874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.744849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.744894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.744909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.753318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.753368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.753386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.760099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.760239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.760255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.768199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.768467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:13.424 [2024-11-15 11:09:32.768483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.777504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.777573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.777589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.785095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.785372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.785388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.790775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.790858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.790873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.795907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.795976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.795991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.799404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.799483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.799498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.805767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.805813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.805829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.808941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.808985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.809000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.812409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.812459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.812475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.816578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.816659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.816675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.821292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.821351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.821366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.829570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.829664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.829679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.837806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.838074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.838091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.845685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.845986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.424 [2024-11-15 11:09:32.846002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.424 [2024-11-15 11:09:32.849248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.424 [2024-11-15 11:09:32.849296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 
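The repeated data_crc32_calc_done errors above come from the NVMe/TCP receive path's data-digest check: the host computes a CRC32C over each received PDU's data field and compares it against the DDGST carried in the PDU, flagging any mismatch as a transport-level (not media) failure. The following is a minimal self-contained sketch of that comparison, not SPDK's actual implementation; the crc32c() helper, payload contents, and ddgst_in_pdu value are illustrative stand-ins.

/* Sketch: the data-digest check behind a "Data digest error".
 * DDGST in NVMe/TCP is a CRC32C (Castagnoli) over the PDU data field. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise reflected CRC32C, polynomial 0x1EDC6F41 (reflected 0x82F63B78),
 * seed 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical received PDU data field and DDGST trailer. */
    uint8_t payload[] = "example NVMe/TCP PDU data field";
    uint32_t ddgst_in_pdu = 0xDEADBEEFu; /* deliberately wrong, as in this test */
    uint32_t computed = crc32c(payload, sizeof(payload));

    if (computed != ddgst_in_pdu)
        fprintf(stderr, "Data digest error: computed 0x%08x != received 0x%08x\n",
                computed, ddgst_in_pdu);
    return 0;
}

Since the corruption is detected on the wire rather than on the device, the command is completed back to the caller with a transport status instead of a media error, which is exactly the (00/22) completion printed after every failing WRITE here.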
00:29:13.424 [2024-11-15 11:09:32.853160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.424 [2024-11-15 11:09:32.853233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.424 [2024-11-15 11:09:32.853249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the identical digest-error / WRITE / (00/22) completion sequence continues for several dozen more writes between 11:09:32.856 and 11:09:33.018; from 11:09:33.025 onward the failing commands carry cid:0 instead of cid:1 ...]
00:29:13.690 [2024-11-15 11:09:33.031932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.690 [2024-11-15 11:09:33.032006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.690 [2024-11-15 11:09:33.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.038494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.038509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.041823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.041918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.041933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.048345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.048395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.048410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.056767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.056863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.056878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.059724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.059805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.059820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.062676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.062769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.062784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.065542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.065625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.065640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.068613] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.068674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.068689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.071997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.072075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.072091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.074848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.074921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.074937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.077520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.077586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.077602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.080302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.080385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.080400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.082951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.083011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.083026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.085415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.085494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.085512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.088046] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.088141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.088156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.091499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.091631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.091647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.101761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.101849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.101864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.111356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.111632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.111649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.121042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.690 [2024-11-15 11:09:33.121303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.690 [2024-11-15 11:09:33.121319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.690 [2024-11-15 11:09:33.125601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.125699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.125715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.128876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.128971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.128986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 
[2024-11-15 11:09:33.131635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.131731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.131747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.134533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.134638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.134654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.137422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.137519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.137534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.140289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.140378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.140393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.143042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.143129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.143145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.145685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.145796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.145811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.149226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.149314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.149330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.152220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.152353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.152369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.155319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.155412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.155427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.158164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.158256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.158272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.162979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.163072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.163087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.168882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.169143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.169158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.175462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.175549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.179131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.179224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.179240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.183065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.183152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.183167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.187353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.187438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.187454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.190727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.190812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.195574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.195660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.195676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.199550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.199653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.199671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.203719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.203929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.203944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.209445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.209531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.209547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.691 [2024-11-15 11:09:33.214360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.691 [2024-11-15 11:09:33.214644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.691 [2024-11-15 11:09:33.214659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.219182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.219268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.219284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.223714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.223805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.223821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.227678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.227764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.227780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.230844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.230936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.230951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.234506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.234597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.234613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.239646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.239757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.239773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.246307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.246654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.953 [2024-11-15 11:09:33.246672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.953 [2024-11-15 11:09:33.251181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.953 [2024-11-15 11:09:33.251276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.251292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.954 [2024-11-15 11:09:33.255833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.954 [2024-11-15 11:09:33.255930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.255945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.954 [2024-11-15 11:09:33.265125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.954 [2024-11-15 11:09:33.265289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.265304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.954 [2024-11-15 11:09:33.274162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.954 [2024-11-15 11:09:33.274251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.274266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.954 [2024-11-15 11:09:33.278183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.954 [2024-11-15 11:09:33.278272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.278287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.954 [2024-11-15 11:09:33.281722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8 00:29:13.954 [2024-11-15 11:09:33.281809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.954 [2024-11-15 11:09:33.281824] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.954 [2024-11-15 11:09:33.285063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.954 [2024-11-15 11:09:33.285151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.954 [2024-11-15 11:09:33.285166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.954 [2024-11-15 11:09:33.287890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.954 [2024-11-15 11:09:33.287987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.954 [2024-11-15 11:09:33.288003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.954 [2024-11-15 11:09:33.290720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.954 [2024-11-15 11:09:33.290806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.954 [2024-11-15 11:09:33.290821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.954 6508.00 IOPS, 813.50 MiB/s [2024-11-15T10:09:33.481Z] [2024-11-15 11:09:33.294692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2f860) with pdu=0x200016eff3c8
00:29:13.954 [2024-11-15 11:09:33.294786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.954 [2024-11-15 11:09:33.294800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.954
00:29:13.954                                                                                                 Latency(us)
00:29:13.954 [2024-11-15T10:09:33.481Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:13.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:13.954 nvme0n1                     :       2.00    6509.32     813.66       0.00     0.00    2454.63     989.87   11359.57
00:29:13.954 [2024-11-15T10:09:33.481Z] ===================================================================================================================
00:29:13.954 [2024-11-15T10:09:33.481Z] Total                       :               6509.32     813.66       0.00     0.00    2454.63     989.87   11359.57
00:29:13.954 {
00:29:13.954   "results": [
00:29:13.954     {
00:29:13.954       "job": "nvme0n1",
00:29:13.954       "core_mask": "0x2",
00:29:13.954       "workload": "randwrite",
00:29:13.954       "status": "finished",
00:29:13.954       "queue_depth": 16,
00:29:13.954       "io_size": 131072,
00:29:13.954       "runtime": 2.002667,
00:29:13.954       "iops": 6509.31982201734,
00:29:13.954       "mibps": 813.6649777521675,
00:29:13.954       "io_failed": 0,
00:29:13.954       "io_timeout": 0,
00:29:13.954       "avg_latency_us": 2454.6338876956124,
00:29:13.954       "min_latency_us": 989.8666666666667,
00:29:13.954       "max_latency_us": 11359.573333333334
00:29:13.954     }
00:29:13.954   ],
00:29:13.954   "core_count": 1
00:29:13.954 }
00:29:13.954 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
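Note: the summary is self-consistent: each I/O is 131072 bytes (128 KiB = 1/8 MiB), so 6509.32 IOPS / 8 = 813.66 MiB/s, matching the mibps field. The script then reads the transient-error counter back over the bperf RPC socket; a minimal sketch of what the two digest.sh helpers appear to do, reconstructed from the xtrace expanded just below (only the rpc.py invocation and the jq filter are verbatim from the trace; the function bodies are assumed):

  # hypothetical reconstruction of host/digest.sh helpers
  bperf_rpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock "$@"
  }
  get_transient_errcount() {
      bperf_rpc bdev_get_iostat -b "$1" | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'
  }

The assertion at digest.sh@71 then reduces to (( 421 > 0 )): the test passes only if at least one transient transport error was actually provoked.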
00:29:13.954 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:13.954 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:13.954 | .driver_specific 00:29:13.954 | .nvme_error 00:29:13.954 | .status_code 00:29:13.954 | .command_transient_transport_error' 00:29:13.954 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 569477 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 569477 ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 569477 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 569477 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 569477' 00:29:14.215 killing process with pid 569477 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 569477 00:29:14.215 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.215 00:29:14.215 Latency(us) 00:29:14.215 [2024-11-15T10:09:33.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.215 [2024-11-15T10:09:33.742Z] =================================================================================================================== 00:29:14.215 [2024-11-15T10:09:33.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 569477 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 566954 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 566954 ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 566954 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 566954 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:14.215 11:09:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 566954' 00:29:14.215 killing process with pid 566954 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 566954 00:29:14.215 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 566954 00:29:14.476 00:29:14.476 real 0m16.426s 00:29:14.476 user 0m32.308s 00:29:14.476 sys 0m3.851s 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.476 ************************************ 00:29:14.476 END TEST nvmf_digest_error 00:29:14.476 ************************************ 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.476 rmmod nvme_tcp 00:29:14.476 rmmod nvme_fabrics 00:29:14.476 rmmod nvme_keyring 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 566954 ']' 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 566954 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 566954 ']' 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 566954 00:29:14.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (566954) - No such process 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 566954 is not found' 00:29:14.476 Process with pid 566954 is not found 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # 
iptables-restore 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.476 11:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.019 00:29:17.019 real 0m43.252s 00:29:17.019 user 1m7.579s 00:29:17.019 sys 0m13.392s 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:17.019 ************************************ 00:29:17.019 END TEST nvmf_digest 00:29:17.019 ************************************ 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.019 ************************************ 00:29:17.019 START TEST nvmf_bdevperf 00:29:17.019 ************************************ 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:17.019 * Looking for test storage... 
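Note: killprocess is expanded twice in the teardown above, once for the bperf client (pid 569477) and once via nvmf/common.sh@518 for pid 566954, which is already gone by then, hence the "No such process" and "is not found" records. Its shape, reconstructed from the autotest_common.sh xtrace fragments (the sudo branch is not exercised in this run, so its body is assumed):

  # hypothetical reconstruction of autotest_common.sh killprocess
  killprocess() {
      local pid=$1 process_name
      [[ -n "$pid" ]] || return 1                         # @952: '[' -z ... ']'
      if ! kill -0 "$pid"; then                           # @956: still alive?
          echo "Process with pid $pid is not found"       # @979
          return 1
      fi
      if [[ "$(uname)" == "Linux" ]]; then                # @957
          process_name=$(ps --no-headers -o comm= "$pid") # @958
      fi
      if [[ "$process_name" == "sudo" ]]; then            # @962
          :  # child-pid handling; not visible in this trace
      fi
      echo "killing process with pid $pid"                # @970
      kill "$pid"                                         # @971
      wait "$pid"                                         # @976
  }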
00:29:17.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:17.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.019 --rc genhtml_branch_coverage=1 00:29:17.019 --rc genhtml_function_coverage=1 00:29:17.019 --rc genhtml_legend=1 00:29:17.019 --rc geninfo_all_blocks=1 00:29:17.019 --rc geninfo_unexecuted_blocks=1 00:29:17.019 00:29:17.019 ' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:17.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.019 --rc genhtml_branch_coverage=1 00:29:17.019 --rc genhtml_function_coverage=1 00:29:17.019 --rc genhtml_legend=1 00:29:17.019 --rc geninfo_all_blocks=1 00:29:17.019 --rc geninfo_unexecuted_blocks=1 00:29:17.019 00:29:17.019 ' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:17.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.019 --rc genhtml_branch_coverage=1 00:29:17.019 --rc genhtml_function_coverage=1 00:29:17.019 --rc genhtml_legend=1 00:29:17.019 --rc geninfo_all_blocks=1 00:29:17.019 --rc geninfo_unexecuted_blocks=1 00:29:17.019 00:29:17.019 ' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:17.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.019 --rc genhtml_branch_coverage=1 00:29:17.019 --rc genhtml_function_coverage=1 00:29:17.019 --rc genhtml_legend=1 00:29:17.019 --rc geninfo_all_blocks=1 00:29:17.019 --rc geninfo_unexecuted_blocks=1 00:29:17.019 00:29:17.019 ' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.019 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.020 11:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:25.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:25.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
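gather_supported_nvmf_pci_devs, traced above, builds per-NIC arrays (e810, x722, mlx) by looking up vendor:device pairs in a pci_bus_cache map, which is how the two "Found 0000:4b:00.x (0x8086 - 0x159b)" lines are produced. The cache population itself is not shown in this part of the log; the sketch below gets the same answer directly from sysfs for the two E810 device IDs that matter on this rig (0x1592, 0x159b):

    #!/usr/bin/env bash
    # Enumerate Intel E810 PCI functions straight from sysfs; a stand-in for
    # the framework's pci_bus_cache lookup, which this trace does not show.
    intel=0x8086
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        if [[ $vendor == "$intel" && ($device == 0x1592 || $device == 0x159b) ]]; then
            e810+=("${dev##*/}")
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done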
00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:25.162 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.162 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:25.162 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
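For each matching PCI function, the script resolves the kernel interface name with the glob "/sys/bus/pci/devices/$pci/net/"* and then strips the leading path, which is exactly what yields the "Found net devices under 0000:4b:00.0: cvl_0_0" lines above. Standalone form of that lookup (nullglob added here so an unbound device yields an empty array instead of a literal '*'):

    # Map a PCI address to its net interface name(s), as in the trace.
    pci=0000:4b:00.0
    shopt -s nullglob
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    shopt -u nullglob
    echo "Found net devices under $pci: ${pci_net_devs[*]}"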
"$NVMF_TARGET_NAMESPACE") 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:29:25.163 00:29:25.163 --- 10.0.0.2 ping statistics --- 00:29:25.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.163 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:29:25.163 00:29:25.163 --- 10.0.0.1 ping statistics --- 00:29:25.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.163 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=574365 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 574365 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 574365 ']' 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.163 11:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.163 [2024-11-15 11:09:43.919158] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
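With the data path verified by the two pings, nvmfappstart launches nvmf_tgt inside the namespace (core mask 0xE, i.e. cores 1-3) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh and is more involved; a minimal stand-in, assuming the standard SPDK tree layout and polling with the rpc_get_methods RPC:

    # Minimal stand-in for the waitforlisten step; the retry budget is an
    # assumption, not the real helper's value.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "nvmf_tgt up as pid $nvmfpid"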
00:29:25.163 [2024-11-15 11:09:43.919229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.163 [2024-11-15 11:09:44.020375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.163 [2024-11-15 11:09:44.074087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.163 [2024-11-15 11:09:44.074144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.163 [2024-11-15 11:09:44.074152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.163 [2024-11-15 11:09:44.074159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.163 [2024-11-15 11:09:44.074165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.163 [2024-11-15 11:09:44.075983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.163 [2024-11-15 11:09:44.076145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.163 [2024-11-15 11:09:44.076146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 [2024-11-15 11:09:44.801168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 Malloc0 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
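Once the target is up, tgt_init drives three RPCs, all visible in the trace: create the TCP transport with an 8192-byte IO unit, back it with a 64 MiB / 512-byte-block malloc bdev, and create subsystem cnode1 with serial SPDK00000000000001. The test's rpc_cmd wrapper talks to the default socket; replayed with scripts/rpc.py (with $SPDK as in the previous sketch) the sequence is:

    # Control-plane sequence from host/bdevperf.sh@17-19, via rpc.py.
    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001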
00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.423 [2024-11-15 11:09:44.875813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.423 { 00:29:25.423 "params": { 00:29:25.423 "name": "Nvme$subsystem", 00:29:25.423 "trtype": "$TEST_TRANSPORT", 00:29:25.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.423 "adrfam": "ipv4", 00:29:25.423 "trsvcid": "$NVMF_PORT", 00:29:25.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.423 "hdgst": ${hdgst:-false}, 00:29:25.423 "ddgst": ${ddgst:-false} 00:29:25.423 }, 00:29:25.423 "method": "bdev_nvme_attach_controller" 00:29:25.423 } 00:29:25.423 EOF 00:29:25.423 )") 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:25.423 11:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.423 "params": { 00:29:25.423 "name": "Nvme1", 00:29:25.423 "trtype": "tcp", 00:29:25.423 "traddr": "10.0.0.2", 00:29:25.423 "adrfam": "ipv4", 00:29:25.423 "trsvcid": "4420", 00:29:25.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.423 "hdgst": false, 00:29:25.423 "ddgst": false 00:29:25.423 }, 00:29:25.423 "method": "bdev_nvme_attach_controller" 00:29:25.423 }' 00:29:25.423 [2024-11-15 11:09:44.936380] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
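The namespace and listener RPCs (@20-21) complete the target, and @27 then starts bdevperf with the generated JSON fed through /dev/fd/62. gen_nvmf_target_json prints only the per-controller fragment in the trace; the sketch below spells out the same attach as a standalone config file, where the outer subsystems/bdev envelope is the usual SPDK JSON-config shape assumed here rather than shown in the log:

    # Equivalent of `gen_nvmf_target_json | bdevperf --json /dev/fd/62 ...`,
    # using a temp file. Params are copied from the printf in the trace.
    json='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_nvme_attach_controller",
       "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
                 "adrfam":"ipv4","trsvcid":"4420",
                 "subnqn":"nqn.2016-06.io.spdk:cnode1",
                 "hostnqn":"nqn.2016-06.io.spdk:host1",
                 "hdgst":false,"ddgst":false}}]}]}'
    printf '%s\n' "$json" > /tmp/bdevperf.json
    "$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json \
        -q 128 -o 4096 -w verify -t 1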
00:29:25.423 [2024-11-15 11:09:44.936446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574713 ] 00:29:25.683 [2024-11-15 11:09:45.030085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.683 [2024-11-15 11:09:45.083094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.943 Running I/O for 1 seconds... 00:29:26.885 8490.00 IOPS, 33.16 MiB/s 00:29:26.885 Latency(us) 00:29:26.885 [2024-11-15T10:09:46.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.885 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.885 Verification LBA range: start 0x0 length 0x4000 00:29:26.885 Nvme1n1 : 1.01 8549.97 33.40 0.00 0.00 14896.88 3358.72 14308.69 00:29:26.885 [2024-11-15T10:09:46.412Z] =================================================================================================================== 00:29:26.885 [2024-11-15T10:09:46.412Z] Total : 8549.97 33.40 0.00 0.00 14896.88 3358.72 14308.69 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=575002 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.145 { 00:29:27.145 "params": { 00:29:27.145 "name": "Nvme$subsystem", 00:29:27.145 "trtype": "$TEST_TRANSPORT", 00:29:27.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.145 "adrfam": "ipv4", 00:29:27.145 "trsvcid": "$NVMF_PORT", 00:29:27.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.145 "hdgst": ${hdgst:-false}, 00:29:27.145 "ddgst": ${ddgst:-false} 00:29:27.145 }, 00:29:27.145 "method": "bdev_nvme_attach_controller" 00:29:27.145 } 00:29:27.145 EOF 00:29:27.145 )") 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
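After the 1-second smoke run completes (8549.97 IOPS average in the table above), @29-30 relaunch bdevperf in the background for a 15-second run (pid 575002) and @32 sleeps; the trace that follows shows @33 then hard-killing the target. The shape of that failover step, with $SPDK and $nvmfpid as in the earlier sketches:

    # Failover pattern from host/bdevperf.sh@29-35: long verify run in the
    # background, target hard-killed mid-run. The initiator then completes
    # every queued command with ABORTED - SQ DELETION, which is the wall of
    # nvme_qpair notices that follows in the log.
    "$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!   # kept so the harness can signal/await the job later
    sleep 3
    kill -9 "$nvmfpid"   # pid 574365 in this run
    sleep 3              # let the abort storm drain before the next step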
00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:27.145 11:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.145 "params": { 00:29:27.145 "name": "Nvme1", 00:29:27.145 "trtype": "tcp", 00:29:27.145 "traddr": "10.0.0.2", 00:29:27.145 "adrfam": "ipv4", 00:29:27.145 "trsvcid": "4420", 00:29:27.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.145 "hdgst": false, 00:29:27.145 "ddgst": false 00:29:27.145 }, 00:29:27.145 "method": "bdev_nvme_attach_controller" 00:29:27.145 }' 00:29:27.145 [2024-11-15 11:09:46.483775] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:29:27.145 [2024-11-15 11:09:46.483832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575002 ] 00:29:27.145 [2024-11-15 11:09:46.570412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.145 [2024-11-15 11:09:46.605758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.405 Running I/O for 15 seconds... 00:29:29.285 10950.00 IOPS, 42.77 MiB/s [2024-11-15T10:09:49.759Z] 11029.50 IOPS, 43.08 MiB/s [2024-11-15T10:09:49.759Z] 11:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 574365 00:29:30.232 11:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:30.232 [2024-11-15 11:09:49.451159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 11:09:49.451200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232 [2024-11-15 11:09:49.451219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 11:09:49.451230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232 [2024-11-15 11:09:49.451242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 11:09:49.451251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232 [2024-11-15 11:09:49.451262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 11:09:49.451269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232 [2024-11-15 11:09:49.451279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 11:09:49.451286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232 [2024-11-15 11:09:49.451296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.232 [2024-11-15 
11:09:49.451305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.232
[the same nvme_qpair print_command and "ABORTED - SQ DELETION (00/08)" print_completion pair repeats for every remaining queued command on qid:1 (WRITE lba 106336-106784, READ lba 105768-106112); elided]
00:29:30.236 [2024-11-15 11:09:49.453063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 
11:09:49.453230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.236 [2024-11-15 11:09:49.453402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa11b0 is same with the state(6) to be set 00:29:30.236 [2024-11-15 11:09:49.453420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:30.236 [2024-11-15 11:09:49.453426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:30.236 [2024-11-15 11:09:49.453432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106280 len:8 PRP1 0x0 PRP2 0x0 00:29:30.236 [2024-11-15 11:09:49.453440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.236 [2024-11-15 11:09:49.453530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.236 [2024-11-15 11:09:49.453546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.236 [2024-11-15 11:09:49.453554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.237 [2024-11-15 11:09:49.453567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.237 [2024-11-15 11:09:49.453575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.237 [2024-11-15 11:09:49.453583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.237 [2024-11-15 11:09:49.453590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.457179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.457199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.457900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.457919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.457927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.458147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.458367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.458379] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.458387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.458396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.237 [2024-11-15 11:09:49.471323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.471992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.472030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.472043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.472288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.472510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.472520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.472528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.472536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.237 [2024-11-15 11:09:49.485234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.485810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.485831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.485839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.486058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.486276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.486285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.486292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.486299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.237 [2024-11-15 11:09:49.499085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.499678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.499719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.499732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.499975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.500197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.500207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.500215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.500222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.237 [2024-11-15 11:09:49.512915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.513585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.513627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.513639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.513883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.514107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.514117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.514125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.514133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.237 [2024-11-15 11:09:49.526831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.527422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.527452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.527677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.527897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.527906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.527913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.527920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.237 [2024-11-15 11:09:49.540611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.237 [2024-11-15 11:09:49.541154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.237 [2024-11-15 11:09:49.541173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.237 [2024-11-15 11:09:49.541181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.237 [2024-11-15 11:09:49.541399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.237 [2024-11-15 11:09:49.541625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.237 [2024-11-15 11:09:49.541635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.237 [2024-11-15 11:09:49.541642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.237 [2024-11-15 11:09:49.541649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.238 [2024-11-15 11:09:49.554545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.555215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.555266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.555278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.555522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.555755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.555765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.555773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.555783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.238 [2024-11-15 11:09:49.568374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.568961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.568985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.568993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.569213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.569432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.569440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.569448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.569455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.238 [2024-11-15 11:09:49.582154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.582708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.582729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.582736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.582956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.583174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.583183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.583190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.583197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.238 [2024-11-15 11:09:49.596124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.596681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.596702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.596710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.596936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.597154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.597164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.597172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.597179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.238 [2024-11-15 11:09:49.610089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.610646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.610683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.610693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.610925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.611148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.611156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.611164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.611171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.238 [2024-11-15 11:09:49.623893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.624542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.624611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.238 [2024-11-15 11:09:49.624624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.238 [2024-11-15 11:09:49.624876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.238 [2024-11-15 11:09:49.625102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.238 [2024-11-15 11:09:49.625111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.238 [2024-11-15 11:09:49.625120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.238 [2024-11-15 11:09:49.625128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.238 [2024-11-15 11:09:49.637855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.238 [2024-11-15 11:09:49.638477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.238 [2024-11-15 11:09:49.638505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.638513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.638742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.638964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.638972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.638987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.638994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.239 [2024-11-15 11:09:49.651719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.652371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.652433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.652446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.652714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.652943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.652955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.652963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.652972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.239 [2024-11-15 11:09:49.665710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.666297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.666327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.666336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.666558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.666803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.666813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.666821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.666828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.239 [2024-11-15 11:09:49.679553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.680266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.680327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.680341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.680608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.680835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.680845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.680854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.680863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.239 [2024-11-15 11:09:49.693408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.694014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.694044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.694053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.694275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.694495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.694504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.694512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.694519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.239 [2024-11-15 11:09:49.707239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.707841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.707867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.707876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.708097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.708316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.708326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.708335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.708343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.239 [2024-11-15 11:09:49.721072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.721651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.721676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.721684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.721904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.722124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.722133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.722141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.722149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.239 [2024-11-15 11:09:49.734866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.735577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.239 [2024-11-15 11:09:49.735646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.239 [2024-11-15 11:09:49.735659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.239 [2024-11-15 11:09:49.735914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.239 [2024-11-15 11:09:49.736142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.239 [2024-11-15 11:09:49.736152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.239 [2024-11-15 11:09:49.736162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.239 [2024-11-15 11:09:49.736172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.239 [2024-11-15 11:09:49.748713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.239 [2024-11-15 11:09:49.749358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.240 [2024-11-15 11:09:49.749387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.240 [2024-11-15 11:09:49.749397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.240 [2024-11-15 11:09:49.749630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.240 [2024-11-15 11:09:49.749853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.240 [2024-11-15 11:09:49.749862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.240 [2024-11-15 11:09:49.749870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.240 [2024-11-15 11:09:49.749877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.503 [2024-11-15 11:09:49.762660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.503 [2024-11-15 11:09:49.763330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.763393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.763406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.763676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.763911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.763921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.763930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.763940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.504 9868.33 IOPS, 38.55 MiB/s [2024-11-15T10:09:50.031Z] [2024-11-15 11:09:49.776481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.777226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.777289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.777302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.777585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.777812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.777822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.777830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.777839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.504 [2024-11-15 11:09:49.790384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.791030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.791059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.791068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.791290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.791511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.791520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.791528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.791535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.504 [2024-11-15 11:09:49.804266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.804940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.805003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.805016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.805271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.805497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.805507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.805516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.805525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.504 [2024-11-15 11:09:49.818067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.818681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.818711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.818720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.818942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.819162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.819171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.819187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.819194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.504 [2024-11-15 11:09:49.831931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.832503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.832528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.832536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.832765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.832987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.832998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.833005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.833013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.504 [2024-11-15 11:09:49.845740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.846405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.846468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.846480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.846746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.846975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.846984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.846993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.847002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.504 [2024-11-15 11:09:49.859740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.860451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.860515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.860527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.860793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.861021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.861031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.861039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.861048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.504 [2024-11-15 11:09:49.873604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.504 [2024-11-15 11:09:49.874191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.504 [2024-11-15 11:09:49.874220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:30.504 [2024-11-15 11:09:49.874229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:30.504 [2024-11-15 11:09:49.874451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:30.504 [2024-11-15 11:09:49.874680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.504 [2024-11-15 11:09:49.874690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.504 [2024-11-15 11:09:49.874698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.504 [2024-11-15 11:09:49.874705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.504 [2024-11-15 11:09:49.887454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.504 [2024-11-15 11:09:49.888032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.504 [2024-11-15 11:09:49.888058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.504 [2024-11-15 11:09:49.888067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.504 [2024-11-15 11:09:49.888289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.504 [2024-11-15 11:09:49.888509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.504 [2024-11-15 11:09:49.888519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.504 [2024-11-15 11:09:49.888527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.504 [2024-11-15 11:09:49.888535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.901277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.901811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.901836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.901845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.902067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.902288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.902297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.902305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.902313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.915247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.915914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.915978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.916000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.916256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.916483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.916494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.916503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.916512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.929048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.929726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.929788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.929802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.930058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.930284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.930294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.930303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.930311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.943049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.943687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.943717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.943726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.943946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.944166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.944176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.944184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.944192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.956903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.957493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.957518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.957526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.957757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.957987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.957995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.958003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.958011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.970751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.971345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.971404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.971416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.971678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.971906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.971916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.971924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.971932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.984668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.985259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.985287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.985296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.985518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.985749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.985760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.985770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.985778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:49.998525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:49.999011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:49.999042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:49.999051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:49.999274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:49.999496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:49.999506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:49.999521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:49.999529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:50.012403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:50.012882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:50.012910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:50.012919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:50.013141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:50.013361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:50.013371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:50.013379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:50.013386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.505 [2024-11-15 11:09:50.026310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.505 [2024-11-15 11:09:50.026883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.505 [2024-11-15 11:09:50.026907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.505 [2024-11-15 11:09:50.026916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.505 [2024-11-15 11:09:50.027136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.505 [2024-11-15 11:09:50.027356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.505 [2024-11-15 11:09:50.027365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.505 [2024-11-15 11:09:50.027372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.505 [2024-11-15 11:09:50.027380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.040296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.040806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.040829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.040837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.041058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.041278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.041288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.041296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.041304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.054235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.054780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.054840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.054858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.055117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.055346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.055363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.055378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.055393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.068149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.068653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.068680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.068688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.068910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.069129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.069139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.069146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.069154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.082070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.082555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.082582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.082591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.082811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.083030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.083039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.083048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.083056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.096008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.096470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.096492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.096507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.096733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.096953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.096963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.096972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.096979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.109876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.110520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.110588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.110603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.110854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.111079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.111089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.111097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.111105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.123823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.124452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.124478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.769 [2024-11-15 11:09:50.124487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.769 [2024-11-15 11:09:50.124715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.769 [2024-11-15 11:09:50.124937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.769 [2024-11-15 11:09:50.124947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.769 [2024-11-15 11:09:50.124955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.769 [2024-11-15 11:09:50.124962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.769 [2024-11-15 11:09:50.137746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.769 [2024-11-15 11:09:50.138336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.769 [2024-11-15 11:09:50.138359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.770 [2024-11-15 11:09:50.138367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.770 [2024-11-15 11:09:50.138596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.770 [2024-11-15 11:09:50.138825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.770 [2024-11-15 11:09:50.138834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.770 [2024-11-15 11:09:50.138842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.770 [2024-11-15 11:09:50.138849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.770 [2024-11-15 11:09:50.151546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.770 [2024-11-15 11:09:50.152127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.770 [2024-11-15 11:09:50.152152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.770 [2024-11-15 11:09:50.152160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.770 [2024-11-15 11:09:50.152380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.770 [2024-11-15 11:09:50.152608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.770 [2024-11-15 11:09:50.152618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.770 [2024-11-15 11:09:50.152626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.770 [2024-11-15 11:09:50.152634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.770 [2024-11-15 11:09:50.165350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.770 [2024-11-15 11:09:50.166046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.770 [2024-11-15 11:09:50.166110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.770 [2024-11-15 11:09:50.166123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.770 [2024-11-15 11:09:50.166379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.770 [2024-11-15 11:09:50.166620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.770 [2024-11-15 11:09:50.166631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.770 [2024-11-15 11:09:50.166640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.770 [2024-11-15 11:09:50.166649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.770 [2024-11-15 11:09:50.179252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.770 [2024-11-15 11:09:50.179963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.770 [2024-11-15 11:09:50.180026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.770 [2024-11-15 11:09:50.180039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.770 [2024-11-15 11:09:50.180294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.770 [2024-11-15 11:09:50.180522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.770 [2024-11-15 11:09:50.180532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.770 [2024-11-15 11:09:50.180548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.770 [2024-11-15 11:09:50.180558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.770 [2024-11-15 11:09:50.193114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.770 [2024-11-15 11:09:50.193881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.770 [2024-11-15 11:09:50.193943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.770 [2024-11-15 11:09:50.193956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.770 [2024-11-15 11:09:50.194210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.770 [2024-11-15 11:09:50.194436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.770 [2024-11-15 11:09:50.194447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.770 [2024-11-15 11:09:50.194455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.770 [2024-11-15 11:09:50.194464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.770 [2024-11-15 11:09:50.207051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.770 [2024-11-15 11:09:50.207717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.207780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.207792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.208047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.208274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.208285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.208295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.208304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.221044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.221701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.221764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.221777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.222032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.222258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.222268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.222276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.222285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.235022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.235686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.235748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.235763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.236019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.236245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.236255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.236263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.236272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.249012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.249528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.249560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.249580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.249804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.250024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.250034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.250042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.250050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.263031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.263683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.263744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.263757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.264012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.264238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.264249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.264257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.264267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.277014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.277778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.277841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.277861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.278115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.278341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.278351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.278360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.278369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.771 [2024-11-15 11:09:50.290915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.771 [2024-11-15 11:09:50.291616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.771 [2024-11-15 11:09:50.291678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:30.771 [2024-11-15 11:09:50.291692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:30.771 [2024-11-15 11:09:50.291947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:30.771 [2024-11-15 11:09:50.292174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.771 [2024-11-15 11:09:50.292183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.771 [2024-11-15 11:09:50.292192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.771 [2024-11-15 11:09:50.292201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.304740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.305341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.305370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.305379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.305612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.305835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.305845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.305853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.305860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.318564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.319261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.319323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.319336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.319604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.319840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.319850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.319858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.319867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.332452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.333144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.333207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.333219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.333474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.333715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.333726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.333735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.333744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.346258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.346996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.347059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.347072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.347327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.347554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.347576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.347585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.347595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.360108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.360897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.360960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.360973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.361228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.361454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.361464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.361479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.361488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.374034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.374704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.374767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.374780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.375035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.375262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.375271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.375280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.375289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.388015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.388677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.388741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.388755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.389011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.034 [2024-11-15 11:09:50.389237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.034 [2024-11-15 11:09:50.389247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.034 [2024-11-15 11:09:50.389255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.034 [2024-11-15 11:09:50.389264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.034 [2024-11-15 11:09:50.402013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.034 [2024-11-15 11:09:50.402665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.034 [2024-11-15 11:09:50.402712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.034 [2024-11-15 11:09:50.402722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.034 [2024-11-15 11:09:50.402964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.403187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.403196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.403205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.403213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.415939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.416630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.416693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.416706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.416961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.417188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.417197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.417205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.417216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.429746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.430481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.430537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.430548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.430746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.430905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.430912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.430919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.430926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.442369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.442981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.443032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.443042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.443223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.443380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.443387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.443393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.443399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.454989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.455616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.455665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.455686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.455866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.456023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.456031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.456036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.456043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.467638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.468301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.468347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.468357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.468534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.468700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.468708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.468715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.468721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.480300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.480914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.480957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.480967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.481142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.481298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.481304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.481310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.481316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.493038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.493522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.493542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.493548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.493707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.493864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.493871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.493876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.493882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.505709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.506104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.506119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.506124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.506275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.506426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.506431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.506436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.506441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.518406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.519010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.519026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.519032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.035 [2024-11-15 11:09:50.519183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.035 [2024-11-15 11:09:50.519334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.035 [2024-11-15 11:09:50.519339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.035 [2024-11-15 11:09:50.519344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.035 [2024-11-15 11:09:50.519350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.035 [2024-11-15 11:09:50.531031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.035 [2024-11-15 11:09:50.531626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.035 [2024-11-15 11:09:50.531661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.035 [2024-11-15 11:09:50.531669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.036 [2024-11-15 11:09:50.531838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.036 [2024-11-15 11:09:50.531992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.036 [2024-11-15 11:09:50.531998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.036 [2024-11-15 11:09:50.532004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.036 [2024-11-15 11:09:50.532014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.036 [2024-11-15 11:09:50.543707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.036 [2024-11-15 11:09:50.544219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.036 [2024-11-15 11:09:50.544234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.036 [2024-11-15 11:09:50.544240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.036 [2024-11-15 11:09:50.544391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.036 [2024-11-15 11:09:50.544541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.036 [2024-11-15 11:09:50.544547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.036 [2024-11-15 11:09:50.544553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.036 [2024-11-15 11:09:50.544557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.036 [2024-11-15 11:09:50.556379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.036 [2024-11-15 11:09:50.556935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.036 [2024-11-15 11:09:50.556967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.036 [2024-11-15 11:09:50.556976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.036 [2024-11-15 11:09:50.557143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.036 [2024-11-15 11:09:50.557297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.036 [2024-11-15 11:09:50.557303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.036 [2024-11-15 11:09:50.557309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.036 [2024-11-15 11:09:50.557315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.298 [2024-11-15 11:09:50.569009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.298 [2024-11-15 11:09:50.569486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.298 [2024-11-15 11:09:50.569501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.569506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.569662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.569820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.569827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.569832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.569837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.581648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.582103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.582115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.582120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.582270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.582419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.582425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.582430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.582435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.594286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.594854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.594884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.594893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.595062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.595216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.595222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.595228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.595234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.606923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.607504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.607534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.607543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.607719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.607873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.607879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.607885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.607890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.619571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.620120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.620150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.620159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.620328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.620482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.620488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.620493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.620499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.632185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.632689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.632719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.632728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.632896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.633049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.633056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.633061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.633066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.644901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.645476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.645505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.645514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.645690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.645844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.645850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.645856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.645862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.657539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.658147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.658177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.658186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.658352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.658505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.658515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.658520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.658525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.670228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.670846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.670876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.670885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.671051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.671205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.671211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.671217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.671222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.682905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.683475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.299 [2024-11-15 11:09:50.683506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.299 [2024-11-15 11:09:50.683514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.299 [2024-11-15 11:09:50.683688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.299 [2024-11-15 11:09:50.683842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.299 [2024-11-15 11:09:50.683848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.299 [2024-11-15 11:09:50.683853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.299 [2024-11-15 11:09:50.683859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.299 [2024-11-15 11:09:50.695547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.299 [2024-11-15 11:09:50.696098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.696129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.696137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.696303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.696457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.696463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.696469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.696478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.708166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.708687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.708717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.708725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.708894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.709048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.709054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.709060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.709066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.720765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.721325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.721355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.721364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.721531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.721692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.721699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.721705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.721711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.733393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.733984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.734014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.734022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.734188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.734342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.734348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.734354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.734359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.746045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.746656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.746686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.746695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.746864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.747018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.747024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.747029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.747035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.758729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.759273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.759302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.759311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.759477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.759637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.759644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.759650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.759655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 7401.25 IOPS, 28.91 MiB/s [2024-11-15T10:09:50.827Z]
00:29:31.300 [2024-11-15 11:09:50.772194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.772672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.772702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.772710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.772879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.773033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.773038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.773044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.773050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.784881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.785376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.785390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.785396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.785550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.785706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.785712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.785717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.785722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.797540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.798025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.798038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.798044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.798193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.798343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.798349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.798354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.798359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.810169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.300 [2024-11-15 11:09:50.810774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-11-15 11:09:50.810804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.300 [2024-11-15 11:09:50.810813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.300 [2024-11-15 11:09:50.810978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.300 [2024-11-15 11:09:50.811132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.300 [2024-11-15 11:09:50.811138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.300 [2024-11-15 11:09:50.811144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.300 [2024-11-15 11:09:50.811149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.300 [2024-11-15 11:09:50.822840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.301 [2024-11-15 11:09:50.823406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.301 [2024-11-15 11:09:50.823436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.301 [2024-11-15 11:09:50.823444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.301 [2024-11-15 11:09:50.823618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.301 [2024-11-15 11:09:50.823772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.301 [2024-11-15 11:09:50.823782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.301 [2024-11-15 11:09:50.823788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.301 [2024-11-15 11:09:50.823793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.835475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.836022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.836053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.836061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.836227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.563 [2024-11-15 11:09:50.836381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.563 [2024-11-15 11:09:50.836387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.563 [2024-11-15 11:09:50.836392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.563 [2024-11-15 11:09:50.836398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.848085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.848651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.848682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.848691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.848858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.563 [2024-11-15 11:09:50.849011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.563 [2024-11-15 11:09:50.849018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.563 [2024-11-15 11:09:50.849023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.563 [2024-11-15 11:09:50.849029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.860736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.861357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.861387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.861396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.861569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.563 [2024-11-15 11:09:50.861724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.563 [2024-11-15 11:09:50.861730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.563 [2024-11-15 11:09:50.861736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.563 [2024-11-15 11:09:50.861745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.873436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.874070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.874100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.874109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.874277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.563 [2024-11-15 11:09:50.874431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.563 [2024-11-15 11:09:50.874437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.563 [2024-11-15 11:09:50.874443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.563 [2024-11-15 11:09:50.874448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.886139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.886707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.886737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.886745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.886912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.563 [2024-11-15 11:09:50.887065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.563 [2024-11-15 11:09:50.887071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.563 [2024-11-15 11:09:50.887077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.563 [2024-11-15 11:09:50.887083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.563 [2024-11-15 11:09:50.898785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.563 [2024-11-15 11:09:50.899337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.563 [2024-11-15 11:09:50.899367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.563 [2024-11-15 11:09:50.899376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.563 [2024-11-15 11:09:50.899544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.899705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.899712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.899717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.899723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.911413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.911912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.911931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.911936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.912087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.912237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.912245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.912251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.912257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.924104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.924594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.924607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.924613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.924763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.924913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.924919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.924924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.924928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.936752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.937314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.937344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.937352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.937521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.937683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.937690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.937696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.937702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.949387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.949972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.950002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.950011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.950180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.950334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.950340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.950346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.950351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.962048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.962621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.962652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.962661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.962829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.962983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.962990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.962995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.963001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.974698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.975170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.975200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.975209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.975375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.975528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.975535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.975540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.975546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:50.987372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.564 [2024-11-15 11:09:50.988020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.564 [2024-11-15 11:09:50.988051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420
00:29:31.564 [2024-11-15 11:09:50.988060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set
00:29:31.564 [2024-11-15 11:09:50.988227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor
00:29:31.564 [2024-11-15 11:09:50.988380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.564 [2024-11-15 11:09:50.988390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.564 [2024-11-15 11:09:50.988396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.564 [2024-11-15 11:09:50.988402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.564 [2024-11-15 11:09:51.000112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.564 [2024-11-15 11:09:51.000664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-11-15 11:09:51.000695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.564 [2024-11-15 11:09:51.000704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.564 [2024-11-15 11:09:51.000873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.564 [2024-11-15 11:09:51.001026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.564 [2024-11-15 11:09:51.001032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.564 [2024-11-15 11:09:51.001038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.564 [2024-11-15 11:09:51.001044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.564 [2024-11-15 11:09:51.012734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.564 [2024-11-15 11:09:51.013283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-11-15 11:09:51.013313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.564 [2024-11-15 11:09:51.013321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.564 [2024-11-15 11:09:51.013489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.564 [2024-11-15 11:09:51.013649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.564 [2024-11-15 11:09:51.013656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.564 [2024-11-15 11:09:51.013662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.564 [2024-11-15 11:09:51.013668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.565 [2024-11-15 11:09:51.025357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.025921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.025951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.025960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.026127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.026281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.026287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.026292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.026301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.565 [2024-11-15 11:09:51.038003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.038552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.038589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.038597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.038766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.038920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.038926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.038931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.038937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.565 [2024-11-15 11:09:51.050633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.051202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.051233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.051241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.051410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.051571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.051578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.051584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.051589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.565 [2024-11-15 11:09:51.063286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.063850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.063880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.063889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.064054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.064208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.064214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.064220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.064225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.565 [2024-11-15 11:09:51.075924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.076515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.076552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.076561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.076737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.076891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.076897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.076902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.076908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.565 [2024-11-15 11:09:51.088594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.565 [2024-11-15 11:09:51.089151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-11-15 11:09:51.089181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.565 [2024-11-15 11:09:51.089190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.565 [2024-11-15 11:09:51.089356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.565 [2024-11-15 11:09:51.089509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.565 [2024-11-15 11:09:51.089516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.565 [2024-11-15 11:09:51.089521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.565 [2024-11-15 11:09:51.089527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.828 [2024-11-15 11:09:51.101248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.101845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.101876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.828 [2024-11-15 11:09:51.101885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.828 [2024-11-15 11:09:51.102051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.828 [2024-11-15 11:09:51.102205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.828 [2024-11-15 11:09:51.102212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.828 [2024-11-15 11:09:51.102218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.828 [2024-11-15 11:09:51.102223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.828 [2024-11-15 11:09:51.113930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.114415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.828 [2024-11-15 11:09:51.114420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.828 [2024-11-15 11:09:51.114582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.828 [2024-11-15 11:09:51.114734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.828 [2024-11-15 11:09:51.114740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.828 [2024-11-15 11:09:51.114745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.828 [2024-11-15 11:09:51.114749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.828 [2024-11-15 11:09:51.126582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.127023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.127036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.828 [2024-11-15 11:09:51.127042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.828 [2024-11-15 11:09:51.127191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.828 [2024-11-15 11:09:51.127341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.828 [2024-11-15 11:09:51.127347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.828 [2024-11-15 11:09:51.127352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.828 [2024-11-15 11:09:51.127357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.828 [2024-11-15 11:09:51.139187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.139690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.139720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.828 [2024-11-15 11:09:51.139729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.828 [2024-11-15 11:09:51.139898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.828 [2024-11-15 11:09:51.140051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.828 [2024-11-15 11:09:51.140058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.828 [2024-11-15 11:09:51.140064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.828 [2024-11-15 11:09:51.140069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.828 [2024-11-15 11:09:51.151903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.152378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.152393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.828 [2024-11-15 11:09:51.152399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.828 [2024-11-15 11:09:51.152549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.828 [2024-11-15 11:09:51.152704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.828 [2024-11-15 11:09:51.152710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.828 [2024-11-15 11:09:51.152718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.828 [2024-11-15 11:09:51.152723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.828 [2024-11-15 11:09:51.164543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.828 [2024-11-15 11:09:51.165002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.828 [2024-11-15 11:09:51.165016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.165021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.165171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.165321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.165327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.165332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.165336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.829 [2024-11-15 11:09:51.177184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.177774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.177805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.177814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.177980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.178133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.178140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.178145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.178151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.829 [2024-11-15 11:09:51.189844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.190382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.190412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.190420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.190592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.190746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.190753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.190760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.190766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.829 [2024-11-15 11:09:51.202478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.202966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.202997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.203006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.203175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.203328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.203335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.203341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.203347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.829 [2024-11-15 11:09:51.215186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.215772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.215802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.215811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.215979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.216133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.216139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.216145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.216150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.829 [2024-11-15 11:09:51.227844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.228438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.228468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.228477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.228651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.228805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.228812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.228817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.228823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.829 [2024-11-15 11:09:51.240513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.241083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.241117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.241127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.241295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.829 [2024-11-15 11:09:51.241449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.829 [2024-11-15 11:09:51.241456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.829 [2024-11-15 11:09:51.241462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.829 [2024-11-15 11:09:51.241469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.829 [2024-11-15 11:09:51.253160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.829 [2024-11-15 11:09:51.253611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.829 [2024-11-15 11:09:51.253632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.829 [2024-11-15 11:09:51.253638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.829 [2024-11-15 11:09:51.253794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.253945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.253951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.253956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.253961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.830 [2024-11-15 11:09:51.265795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.266285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.266299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.266305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.266455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.266610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.266618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.266623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.266629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.830 [2024-11-15 11:09:51.278455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.279057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.279088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.279097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.279263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.279420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.279426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.279432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.279438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.830 [2024-11-15 11:09:51.291142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.291787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.291817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.291826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.291992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.292145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.292152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.292157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.292162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.830 [2024-11-15 11:09:51.303861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.304201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.304216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.304222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.304372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.304522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.304528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.304533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.304537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.830 [2024-11-15 11:09:51.316504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.316988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.317001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.317006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.317156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.317306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.317312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.317321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.317326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.830 [2024-11-15 11:09:51.329149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.329611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.329642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.329651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.329819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.329972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.329978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.830 [2024-11-15 11:09:51.329984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.830 [2024-11-15 11:09:51.329989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.830 [2024-11-15 11:09:51.341825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.830 [2024-11-15 11:09:51.342404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.830 [2024-11-15 11:09:51.342434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:31.830 [2024-11-15 11:09:51.342443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:31.830 [2024-11-15 11:09:51.342617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:31.830 [2024-11-15 11:09:51.342771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.830 [2024-11-15 11:09:51.342777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.831 [2024-11-15 11:09:51.342783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.831 [2024-11-15 11:09:51.342788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.093 [2024-11-15 11:09:51.354474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.354857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.354872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.354878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.355028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.355179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.355185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.355189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.355194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.093 [2024-11-15 11:09:51.367169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.367653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.367684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.367692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.367861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.368014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.368022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.368028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.368033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.093 [2024-11-15 11:09:51.379880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.380352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.380366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.380372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.380522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.380677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.380684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.380688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.380693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.093 [2024-11-15 11:09:51.392515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.393127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.393158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.393166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.393332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.393486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.393492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.393497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.393503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.093 [2024-11-15 11:09:51.405206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.405679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.405695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.405704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.405855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.406005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.406011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.406016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.406020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.093 [2024-11-15 11:09:51.417851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.418322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.418334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.418340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.418490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.418644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.093 [2024-11-15 11:09:51.418650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.093 [2024-11-15 11:09:51.418655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.093 [2024-11-15 11:09:51.418660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.093 [2024-11-15 11:09:51.430474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.093 [2024-11-15 11:09:51.431018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.093 [2024-11-15 11:09:51.431048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.093 [2024-11-15 11:09:51.431057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.093 [2024-11-15 11:09:51.431223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.093 [2024-11-15 11:09:51.431376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.431382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.431388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.431394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.094 [2024-11-15 11:09:51.443091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.443678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.443708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.443717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.443885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.444043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.444049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.444054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.444060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.094 [2024-11-15 11:09:51.455756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.456334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.456365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.456374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.456541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.456702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.456710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.456716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.456722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.094 [2024-11-15 11:09:51.468414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.468960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.468990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.468999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.469165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.469318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.469324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.469330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.469335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.094 [2024-11-15 11:09:51.481059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.481506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.481521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.481527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.481682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.481833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.481839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.481848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.481853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.094 [2024-11-15 11:09:51.493682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.494248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.494278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.494287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.494453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.494613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.494620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.494627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.494632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.094 [2024-11-15 11:09:51.506325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.506755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.506771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.506777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.506927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.507077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.507082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.507087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.507092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.094 [2024-11-15 11:09:51.518993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.519355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.519369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.519374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.519525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.519678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.519685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.519690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.519695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.094 [2024-11-15 11:09:51.531658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.094 [2024-11-15 11:09:51.532140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.094 [2024-11-15 11:09:51.532152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.094 [2024-11-15 11:09:51.532158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.094 [2024-11-15 11:09:51.532307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.094 [2024-11-15 11:09:51.532457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.094 [2024-11-15 11:09:51.532463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.094 [2024-11-15 11:09:51.532468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.094 [2024-11-15 11:09:51.532473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.094 [2024-11-15 11:09:51.544293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.544675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.544705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.544714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.544882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.545036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.545042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.545048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.545054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.095 [2024-11-15 11:09:51.556894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.557471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.557502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.557510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.557686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.557840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.557846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.557852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.557858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.095 [2024-11-15 11:09:51.569548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.570025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.570040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.570049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.570200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.570350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.570355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.570361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.570366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.095 [2024-11-15 11:09:51.582210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.582756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.582787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.582795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.582962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.583115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.583121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.583127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.583133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.095 [2024-11-15 11:09:51.594829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.595274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.595305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.595313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.595480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.595640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.595647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.595652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.595658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.095 [2024-11-15 11:09:51.607500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.095 [2024-11-15 11:09:51.607985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.095 [2024-11-15 11:09:51.608001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.095 [2024-11-15 11:09:51.608006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.095 [2024-11-15 11:09:51.608157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.095 [2024-11-15 11:09:51.608311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.095 [2024-11-15 11:09:51.608317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.095 [2024-11-15 11:09:51.608322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.095 [2024-11-15 11:09:51.608326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.357 [2024-11-15 11:09:51.620158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.357 [2024-11-15 11:09:51.620641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.357 [2024-11-15 11:09:51.620655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.357 [2024-11-15 11:09:51.620660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.357 [2024-11-15 11:09:51.620810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.357 [2024-11-15 11:09:51.620960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.357 [2024-11-15 11:09:51.620965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.357 [2024-11-15 11:09:51.620970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.357 [2024-11-15 11:09:51.620975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.357 [2024-11-15 11:09:51.632795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.357 [2024-11-15 11:09:51.633275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.357 [2024-11-15 11:09:51.633287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.357 [2024-11-15 11:09:51.633292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.357 [2024-11-15 11:09:51.633442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.357 [2024-11-15 11:09:51.633595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.357 [2024-11-15 11:09:51.633603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.357 [2024-11-15 11:09:51.633608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.357 [2024-11-15 11:09:51.633612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.357 [2024-11-15 11:09:51.645431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.357 [2024-11-15 11:09:51.646028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.357 [2024-11-15 11:09:51.646058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.357 [2024-11-15 11:09:51.646067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.357 [2024-11-15 11:09:51.646233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.357 [2024-11-15 11:09:51.646386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.357 [2024-11-15 11:09:51.646393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.357 [2024-11-15 11:09:51.646402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.357 [2024-11-15 11:09:51.646407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.357 [2024-11-15 11:09:51.658107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.357 [2024-11-15 11:09:51.658711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.658742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.658750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.658917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.659071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.659078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.659083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.659089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 [2024-11-15 11:09:51.670788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.671347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.671377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.671386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.671552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.671711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.671719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.671725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.671730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.683424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.683976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.684006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.684015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.684183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.684337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.684343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.684349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.684355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 [2024-11-15 11:09:51.696043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.696539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.696553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.696559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.696722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.696872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.696878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.696883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.696888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.708707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.709155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.709168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.709173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.709323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.709473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.709478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.709483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.709488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 [2024-11-15 11:09:51.721302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.721869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.721900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.721908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.722077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.722230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.722237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.722243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.722248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.733943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.734332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.734348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.734357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.734507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.734669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.734676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.734681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.734686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 [2024-11-15 11:09:51.746650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.747114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.747127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.747132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.747282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.747432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.747438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.747443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.747448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.759267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.759665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.759695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.759704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.759873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.760026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.760033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.760038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.760044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 5921.00 IOPS, 23.13 MiB/s [2024-11-15T10:09:51.885Z] [2024-11-15 11:09:51.773028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.773593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.773623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.773632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.773806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.773963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.773970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.773976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.773981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.785674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.786148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.786163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.786169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.786319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.786469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.786474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.786480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.786484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.358 [2024-11-15 11:09:51.798318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.798889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.798919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.358 [2024-11-15 11:09:51.798927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.358 [2024-11-15 11:09:51.799093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.358 [2024-11-15 11:09:51.799246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.358 [2024-11-15 11:09:51.799252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.358 [2024-11-15 11:09:51.799258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.358 [2024-11-15 11:09:51.799264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.358 [2024-11-15 11:09:51.810958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.358 [2024-11-15 11:09:51.811316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.358 [2024-11-15 11:09:51.811332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.811337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.811488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.811642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.811648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.811657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.811662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.359 [2024-11-15 11:09:51.823631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.359 [2024-11-15 11:09:51.824084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.359 [2024-11-15 11:09:51.824097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.824103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.824252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.824402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.824409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.824413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.824418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.359 [2024-11-15 11:09:51.836244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.359 [2024-11-15 11:09:51.836702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.359 [2024-11-15 11:09:51.836714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.836720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.836870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.837019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.837025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.837030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.837034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.359 [2024-11-15 11:09:51.848845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.359 [2024-11-15 11:09:51.849227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.359 [2024-11-15 11:09:51.849239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.849244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.849394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.849544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.849549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.849554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.849559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.359 [2024-11-15 11:09:51.861514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.359 [2024-11-15 11:09:51.861968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.359 [2024-11-15 11:09:51.861980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.861985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.862135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.862285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.862290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.862295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.862300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.359 [2024-11-15 11:09:51.874124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.359 [2024-11-15 11:09:51.874684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.359 [2024-11-15 11:09:51.874715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.359 [2024-11-15 11:09:51.874724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.359 [2024-11-15 11:09:51.874893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.359 [2024-11-15 11:09:51.875046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.359 [2024-11-15 11:09:51.875053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.359 [2024-11-15 11:09:51.875058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.359 [2024-11-15 11:09:51.875064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.622 [2024-11-15 11:09:51.886758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.887125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.887140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.887145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.887296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.887446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.887452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.887457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.887462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.622 [2024-11-15 11:09:51.899439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.899874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.899887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.899896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.900046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.900196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.900202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.900207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.900211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.622 [2024-11-15 11:09:51.912044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.912519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.912532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.912537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.912691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.912842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.912847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.912852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.912857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.622 [2024-11-15 11:09:51.924686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.925134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.925146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.925151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.925300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.925450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.925456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.925460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.925465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.622 [2024-11-15 11:09:51.937336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.937887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.937918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.937926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.938092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.938249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.938256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.938261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.938266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.622 [2024-11-15 11:09:51.949959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.950443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.950458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.950464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.950618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.950769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.950775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.950780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.950785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.622 [2024-11-15 11:09:51.962607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.963165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.963196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.963205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.963371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.963524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.963530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.963536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.963542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.622 [2024-11-15 11:09:51.975252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.975874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-11-15 11:09:51.975904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.622 [2024-11-15 11:09:51.975913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.622 [2024-11-15 11:09:51.976079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.622 [2024-11-15 11:09:51.976233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.622 [2024-11-15 11:09:51.976239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.622 [2024-11-15 11:09:51.976245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.622 [2024-11-15 11:09:51.976254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.622 [2024-11-15 11:09:51.987940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.622 [2024-11-15 11:09:51.988428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-11-15 11:09:51.988443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.623 [2024-11-15 11:09:51.988449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.623 [2024-11-15 11:09:51.988604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.623 [2024-11-15 11:09:51.988755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.623 [2024-11-15 11:09:51.988760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.623 [2024-11-15 11:09:51.988767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.623 [2024-11-15 11:09:51.988772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.623 [2024-11-15 11:09:52.000601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.623 [2024-11-15 11:09:52.001169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-11-15 11:09:52.001199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.623 [2024-11-15 11:09:52.001208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.623 [2024-11-15 11:09:52.001374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.623 [2024-11-15 11:09:52.001528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.623 [2024-11-15 11:09:52.001534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.623 [2024-11-15 11:09:52.001540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.623 [2024-11-15 11:09:52.001545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.623 [2024-11-15 11:09:52.013236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.623 [2024-11-15 11:09:52.013690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-11-15 11:09:52.013721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.623 [2024-11-15 11:09:52.013729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.623 [2024-11-15 11:09:52.013898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.623 [2024-11-15 11:09:52.014052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.623 [2024-11-15 11:09:52.014058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.623 [2024-11-15 11:09:52.014064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.623 [2024-11-15 11:09:52.014070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.623 [2024-11-15 11:09:52.025916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.623 [2024-11-15 11:09:52.026475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-11-15 11:09:52.026505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:32.623 [2024-11-15 11:09:52.026513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:32.623 [2024-11-15 11:09:52.026685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:32.623 [2024-11-15 11:09:52.026839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.623 [2024-11-15 11:09:52.026846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.623 [2024-11-15 11:09:52.026852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.623 [2024-11-15 11:09:52.026858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.623 [... the identical reconnect cycle for tqpair=0xa8e000 (connect() failed, errno = 111 -> Failed to flush (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed) repeats every ~12-13 ms from 11:09:52.038 through 11:09:52.431 ...]
00:29:33.154 [2024-11-15 11:09:52.443539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.154 [2024-11-15 11:09:52.444123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.154 [2024-11-15 11:09:52.444154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.154 [2024-11-15 11:09:52.444162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.154 [2024-11-15 11:09:52.444328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.154 [2024-11-15 11:09:52.444481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.154 [2024-11-15 11:09:52.444488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.154 [2024-11-15 11:09:52.444493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.154 [2024-11-15 11:09:52.444499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 574365 Killed "${NVMF_APP[@]}" "$@" 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.154 [2024-11-15 11:09:52.456200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.154 [2024-11-15 11:09:52.456698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.154 [2024-11-15 11:09:52.456727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.154 [2024-11-15 11:09:52.456736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=576067 00:29:33.154 [2024-11-15 11:09:52.456905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.154 [2024-11-15 11:09:52.457059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.154 [2024-11-15 11:09:52.457067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.154 [2024-11-15 11:09:52.457073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.457079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 576067 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 576067 ']' 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:33.155 11:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.155 [2024-11-15 11:09:52.468916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.469482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.469512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.469521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.469696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.469851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.469857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.469863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.469869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.155 [2024-11-15 11:09:52.481558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.482145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.482175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.482184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.482351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.482505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.482511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.482516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.482522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.155 [2024-11-15 11:09:52.494211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.494664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.494680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.494686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.494836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.494986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.494992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.494997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.495006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.155 [2024-11-15 11:09:52.506924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.507376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.507406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.507415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.507591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.507745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.507753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.507758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.507764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.155 [2024-11-15 11:09:52.509879] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:29:33.155 [2024-11-15 11:09:52.509924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.155 [2024-11-15 11:09:52.519593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.520066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.520096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.520105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.520271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.520425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.520431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.520437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.520443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.155 [2024-11-15 11:09:52.532274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.532875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.532914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.533081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.533234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.533240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.533249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.533255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.155 [2024-11-15 11:09:52.544944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.155 [2024-11-15 11:09:52.545553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.155 [2024-11-15 11:09:52.545663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.155 [2024-11-15 11:09:52.545672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.155 [2024-11-15 11:09:52.545839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.155 [2024-11-15 11:09:52.545992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.155 [2024-11-15 11:09:52.545998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.155 [2024-11-15 11:09:52.546003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.155 [2024-11-15 11:09:52.546009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.156 [2024-11-15 11:09:52.557553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.558049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.558063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.558069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.558220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.558370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.558376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.558381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.558386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.156 [2024-11-15 11:09:52.570268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.570883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.570913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.570922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.571089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.571242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.571248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.571254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.571260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.156 [2024-11-15 11:09:52.582965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.583420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.583434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.583441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.583596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.583747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.583752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.583758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.583763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.156 [2024-11-15 11:09:52.595583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.596168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.596198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.596207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.596373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.596527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.596533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.596539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.596544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.156 [2024-11-15 11:09:52.601551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:33.156 [2024-11-15 11:09:52.608244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.608839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.608879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.609046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.609201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.609207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.609213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.609219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.156 [2024-11-15 11:09:52.620914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.621502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.621533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.621545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.621720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.621875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.621881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.621887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.621893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.156 [2024-11-15 11:09:52.631240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.156 [2024-11-15 11:09:52.631261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.156 [2024-11-15 11:09:52.631268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.156 [2024-11-15 11:09:52.631274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.156 [2024-11-15 11:09:52.631278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
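The app_setup_trace notices above spell out how to inspect this run's tracepoints. A minimal sketch using only the commands the notices themselves name (the copy destination is illustrative, not from the log):

    spdk_trace -s nvmf -i 0                      # snapshot events from the running nvmf app, shm instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep the shared-memory trace file for offline analysis/debug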
00:29:33.156 [2024-11-15 11:09:52.632367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.156 [2024-11-15 11:09:52.632518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.156 [2024-11-15 11:09:52.632520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.156 [2024-11-15 11:09:52.633580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.634098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.634127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.634136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.634305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.634459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.634465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.634471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.156 [2024-11-15 11:09:52.634477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.156 [2024-11-15 11:09:52.646183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.156 [2024-11-15 11:09:52.646670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.156 [2024-11-15 11:09:52.646701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.156 [2024-11-15 11:09:52.646710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.156 [2024-11-15 11:09:52.646880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.156 [2024-11-15 11:09:52.647033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.156 [2024-11-15 11:09:52.647040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.156 [2024-11-15 11:09:52.647051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.157 [2024-11-15 11:09:52.647056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
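The reactor notices above show the SPDK app framework starting one reactor per core on cores 1, 2, and 3, consistent with the earlier "Total cores available: 3". A hedged sketch of an invocation that would produce this layout, assuming the usual -m core-mask option; the actual command line is not shown in this excerpt:

    # cores 1,2,3 -> mask 0b1110 = 0xE (assumed; the real mask is not visible in the log)
    nvmf_tgt -m 0xE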
00:29:33.157 [2024-11-15 11:09:52.658893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.157 [2024-11-15 11:09:52.659456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.157 [2024-11-15 11:09:52.659487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.157 [2024-11-15 11:09:52.659495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.157 [2024-11-15 11:09:52.659669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.157 [2024-11-15 11:09:52.659824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.157 [2024-11-15 11:09:52.659830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.157 [2024-11-15 11:09:52.659835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.157 [2024-11-15 11:09:52.659841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.157 [2024-11-15 11:09:52.671529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.157 [2024-11-15 11:09:52.672077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.157 [2024-11-15 11:09:52.672109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.157 [2024-11-15 11:09:52.672118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.157 [2024-11-15 11:09:52.672285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.157 [2024-11-15 11:09:52.672438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.157 [2024-11-15 11:09:52.672445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.157 [2024-11-15 11:09:52.672450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.157 [2024-11-15 11:09:52.672456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.504 [2024-11-15 11:09:52.684161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.504 [2024-11-15 11:09:52.684607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.504 [2024-11-15 11:09:52.684622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.504 [2024-11-15 11:09:52.684629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.504 [2024-11-15 11:09:52.684779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.504 [2024-11-15 11:09:52.684930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.504 [2024-11-15 11:09:52.684935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.504 [2024-11-15 11:09:52.684941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.504 [2024-11-15 11:09:52.684946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.504 [2024-11-15 11:09:52.696773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.504 [2024-11-15 11:09:52.697139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.504 [2024-11-15 11:09:52.697152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.504 [2024-11-15 11:09:52.697157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.504 [2024-11-15 11:09:52.697307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.504 [2024-11-15 11:09:52.697457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.504 [2024-11-15 11:09:52.697463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.504 [2024-11-15 11:09:52.697468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.504 [2024-11-15 11:09:52.697473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.504 [2024-11-15 11:09:52.709460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.504 [2024-11-15 11:09:52.709888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.709901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.709907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.710057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.710206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.710212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.710217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.710221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.505 [2024-11-15 11:09:52.722071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.722614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.722645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.722653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.722823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.722976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.722982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.722988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.722993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.505 [2024-11-15 11:09:52.734683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.735233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.735263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.735275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.735442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.735602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.735609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.735615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.735620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.505 [2024-11-15 11:09:52.747305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.747747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.747762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.747768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.747919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.748070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.748075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.748081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.748086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.505 [2024-11-15 11:09:52.759947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.760539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.760576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.760585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.760754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.760908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.760914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.760920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.760926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.505 [2024-11-15 11:09:52.772620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.773183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.773213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.773222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.773389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.773546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.773552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.773557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.773569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.505 4934.17 IOPS, 19.27 MiB/s [2024-11-15T10:09:53.032Z] [2024-11-15 11:09:52.785254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.785699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.785730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.785739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.785908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.786061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.786068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.786074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.786080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.505 [2024-11-15 11:09:52.797910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.505 [2024-11-15 11:09:52.798395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.505 [2024-11-15 11:09:52.798425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.505 [2024-11-15 11:09:52.798434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.505 [2024-11-15 11:09:52.798610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.505 [2024-11-15 11:09:52.798764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.505 [2024-11-15 11:09:52.798771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.505 [2024-11-15 11:09:52.798776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.505 [2024-11-15 11:09:52.798782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
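The interleaved throughput sample at the start of this stretch (4934.17 IOPS, 19.27 MiB/s) is self-consistent if each I/O is 4 KiB, an assumption the excerpt does not state explicitly:

    4934.17 IOPS × 4096 B = 20,210,360 B/s ≈ 20,210,360 / 1,048,576 ≈ 19.27 MiB/s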
00:29:33.506 [2024-11-15 11:09:52.810621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.811102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.811131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.811140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.811306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.811459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.811465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.811475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.811480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.506 [2024-11-15 11:09:52.823311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.823693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.823723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.823733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.823902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.824055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.824061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.824066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.824072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.506 [2024-11-15 11:09:52.836045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.836598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.836629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.836638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.836807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.836960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.836966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.836972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.836977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.506 [2024-11-15 11:09:52.848678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.849218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.849249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.849257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.849424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.849584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.849591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.849597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.849602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.506 [2024-11-15 11:09:52.861288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.861683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.861713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.861722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.861891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.862044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.862050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.862055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.862061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.506 [2024-11-15 11:09:52.873899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.874459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.874489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.874498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.874668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.874822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.874828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.874834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.874840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.506 [2024-11-15 11:09:52.886537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.887082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.887112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.887122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.887288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.887441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.887448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.887455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.887462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.506 [2024-11-15 11:09:52.899159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.899639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.899670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.899682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.899849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.900003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.900009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.506 [2024-11-15 11:09:52.900015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.506 [2024-11-15 11:09:52.900020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.506 [2024-11-15 11:09:52.911871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.506 [2024-11-15 11:09:52.912318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.506 [2024-11-15 11:09:52.912334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.506 [2024-11-15 11:09:52.912340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.506 [2024-11-15 11:09:52.912490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.506 [2024-11-15 11:09:52.912646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.506 [2024-11-15 11:09:52.912653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.912658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.912664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.507 [2024-11-15 11:09:52.924487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.925088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.925118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.925127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.925293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.925447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.925453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.925459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.925464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.507 [2024-11-15 11:09:52.937166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.937821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.937851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.937861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.938027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.938187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.938194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.938200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.938206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.507 [2024-11-15 11:09:52.949784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.950363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.950394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.950403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.950576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.950730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.950736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.950742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.950747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.507 [2024-11-15 11:09:52.962437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.962886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.962916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.962925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.963092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.963245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.963252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.963257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.963263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.507 [2024-11-15 11:09:52.975109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.975669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.975700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.975708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.975877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.976031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.976037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.976042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.976051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.507 [2024-11-15 11:09:52.987761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.507 [2024-11-15 11:09:52.988222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-11-15 11:09:52.988236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-11-15 11:09:52.988242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.507 [2024-11-15 11:09:52.988392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.507 [2024-11-15 11:09:52.988542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.507 [2024-11-15 11:09:52.988547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.507 [2024-11-15 11:09:52.988552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.507 [2024-11-15 11:09:52.988557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.841 [2024-11-15 11:09:53.000392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.000744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.000757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.000763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.000912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.001062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.001068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.001074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.001078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.841 [2024-11-15 11:09:53.013066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.013424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.013436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.013441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.013595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.013746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.013751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.013757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.013761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.841 [2024-11-15 11:09:53.025725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.026268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.026298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.026307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.026473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.026633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.026640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.026645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.026651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.841 [2024-11-15 11:09:53.038341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.038833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.038850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.038855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.039006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.039156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.039162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.039166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.039171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.841 [2024-11-15 11:09:53.051002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.051450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.051463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.051469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.051623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.051774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.051780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.051785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.051790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.841 [2024-11-15 11:09:53.063608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.063953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.063965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.063970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.841 [2024-11-15 11:09:53.064124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.841 [2024-11-15 11:09:53.064274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.841 [2024-11-15 11:09:53.064279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.841 [2024-11-15 11:09:53.064284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.841 [2024-11-15 11:09:53.064289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.841 [2024-11-15 11:09:53.076259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.841 [2024-11-15 11:09:53.076724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.841 [2024-11-15 11:09:53.076738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.841 [2024-11-15 11:09:53.076743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.076893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.077042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.077048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.077053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.077057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.842 [2024-11-15 11:09:53.088892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.089459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.089490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.089498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.089671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.089825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.089831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.089837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.089842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.842 [2024-11-15 11:09:53.101533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.101994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.102025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.102034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.102200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.102354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.102364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.102369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.102375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.842 [2024-11-15 11:09:53.114211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.114683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.114714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.114723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.114892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.115045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.115052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.115058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.115063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.842 [2024-11-15 11:09:53.126908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.127440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.127470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.127479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.127652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.127806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.127812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.127817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.127823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.842 [2024-11-15 11:09:53.139510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.139963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.139979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.139985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.140135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.140285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.140291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.140297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.140305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.842 [2024-11-15 11:09:53.152137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.152581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.152597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.152603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.152755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.152906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.152912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.152917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.152922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.842 [2024-11-15 11:09:53.164750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.165191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.165221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.165230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.165397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.165551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.165558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.165570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.165577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.842 [2024-11-15 11:09:53.177439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.842 [2024-11-15 11:09:53.178019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.842 [2024-11-15 11:09:53.178049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.842 [2024-11-15 11:09:53.178058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.842 [2024-11-15 11:09:53.178235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.842 [2024-11-15 11:09:53.178389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.842 [2024-11-15 11:09:53.178395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.842 [2024-11-15 11:09:53.178401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.842 [2024-11-15 11:09:53.178407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.843 [2024-11-15 11:09:53.190101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.190547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.190566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.190572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.190722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.190872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.190878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.190883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.190888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.843 [2024-11-15 11:09:53.202727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.203282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.203313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.203321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.203488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.203647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.203656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.203662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.203668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.843 [2024-11-15 11:09:53.215360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.215901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.215931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.215940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.216107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.216261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.216267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.216272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.216278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.843 [2024-11-15 11:09:53.227974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.228418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.228434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.228440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.228599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.228752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.228758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.228763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.228768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.843 [2024-11-15 11:09:53.240602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.241168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.241198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.241207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.241373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.241527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.241534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.241539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.241545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.843 [2024-11-15 11:09:53.253244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.253849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.253881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.253891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.254060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.254215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.254221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.254227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.254233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.843 [2024-11-15 11:09:53.265932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.266550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.266587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.266596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.266765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.266919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.266930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.266936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.266941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.843 [2024-11-15 11:09:53.278637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.279079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.279093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.279099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.279249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.843 [2024-11-15 11:09:53.279399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.843 [2024-11-15 11:09:53.279406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.843 [2024-11-15 11:09:53.279411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.843 [2024-11-15 11:09:53.279416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.843 [2024-11-15 11:09:53.291246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.843 [2024-11-15 11:09:53.291717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.843 [2024-11-15 11:09:53.291748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.843 [2024-11-15 11:09:53.291756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.843 [2024-11-15 11:09:53.291925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.844 [2024-11-15 11:09:53.292078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.844 [2024-11-15 11:09:53.292084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.844 [2024-11-15 11:09:53.292091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.844 [2024-11-15 11:09:53.292097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.844 [2024-11-15 11:09:53.303942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.844 [2024-11-15 11:09:53.304478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.844 [2024-11-15 11:09:53.304509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.844 [2024-11-15 11:09:53.304518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.844 [2024-11-15 11:09:53.304693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.844 [2024-11-15 11:09:53.304847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.844 [2024-11-15 11:09:53.304854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.844 [2024-11-15 11:09:53.304859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.844 [2024-11-15 11:09:53.304868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
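errno 111 is ECONNREFUSED: each reset attempt above fails because nothing is yet listening on 10.0.0.2:4420 — the target's listener is only added later in this trace (the nvmf_tcp_listen notice at 11:09:53.429), after which the final reset succeeds. A quick manual probe of the same condition, assuming nc is available on the test host (the log does not show it), would be:
nc -z 10.0.0.2 4420 || echo "connect refused (errno 111): no listener on 4420 yet"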
00:29:33.844 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.844 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:33.844 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.844 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.844 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.844 [2024-11-15 11:09:53.316567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.844 [2024-11-15 11:09:53.317024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.844 [2024-11-15 11:09:53.317038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.844 [2024-11-15 11:09:53.317044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.844 [2024-11-15 11:09:53.317195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.844 [2024-11-15 11:09:53.317345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.844 [2024-11-15 11:09:53.317352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.844 [2024-11-15 11:09:53.317359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.844 [2024-11-15 11:09:53.317365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.844 [2024-11-15 11:09:53.329202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.844 [2024-11-15 11:09:53.329805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.844 [2024-11-15 11:09:53.329836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:33.844 [2024-11-15 11:09:53.329845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:33.844 [2024-11-15 11:09:53.330011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:33.844 [2024-11-15 11:09:53.330166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.844 [2024-11-15 11:09:53.330172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.844 [2024-11-15 11:09:53.330178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.844 [2024-11-15 11:09:53.330184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
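The "(( i == 0 ))" / "return 0" pair at the top of this block is the tail of the harness's wait-for-target loop. A condensed, hypothetical equivalent — spdk_get_version is a standard SPDK RPC, but the harness's actual probe is not shown in this trace — would be:
until rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done  # poll until the target answers RPC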
00:29:34.147 [2024-11-15 11:09:53.341912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.342365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.342380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.147 [2024-11-15 11:09:53.342386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.147 [2024-11-15 11:09:53.342536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.147 [2024-11-15 11:09:53.342693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.147 [2024-11-15 11:09:53.342699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.147 [2024-11-15 11:09:53.342708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.147 [2024-11-15 11:09:53.342713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.147 [2024-11-15 11:09:53.354551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.354821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.354835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.147 [2024-11-15 11:09:53.354840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.147 [2024-11-15 11:09:53.354990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.147 [2024-11-15 11:09:53.355139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.147 [2024-11-15 11:09:53.355146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.147 [2024-11-15 11:09:53.355151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.147 [2024-11-15 11:09:53.355155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:34.147 [2024-11-15 11:09:53.357169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.147 [2024-11-15 11:09:53.367262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.367679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.367709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.147 [2024-11-15 11:09:53.367718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.147 [2024-11-15 11:09:53.367887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.147 [2024-11-15 11:09:53.368040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.147 [2024-11-15 11:09:53.368047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.147 [2024-11-15 11:09:53.368052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.147 [2024-11-15 11:09:53.368058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:34.147 [2024-11-15 11:09:53.379912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.380364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.380379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.147 [2024-11-15 11:09:53.380388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.147 [2024-11-15 11:09:53.380539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.147 [2024-11-15 11:09:53.380695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.147 [2024-11-15 11:09:53.380702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.147 [2024-11-15 11:09:53.380707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.147 [2024-11-15 11:09:53.380712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:34.147 [2024-11-15 11:09:53.392533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.393002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.147 [2024-11-15 11:09:53.393007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.147 [2024-11-15 11:09:53.393157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.147 [2024-11-15 11:09:53.393307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.147 [2024-11-15 11:09:53.393313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.147 [2024-11-15 11:09:53.393318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.147 [2024-11-15 11:09:53.393322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:34.147 Malloc0 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.147 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.147 [2024-11-15 11:09:53.405157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.147 [2024-11-15 11:09:53.405605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.147 [2024-11-15 11:09:53.405618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.148 [2024-11-15 11:09:53.405623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.148 [2024-11-15 11:09:53.405773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.148 [2024-11-15 11:09:53.405923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.148 [2024-11-15 11:09:53.405929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.148 [2024-11-15 11:09:53.405934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.148 [2024-11-15 11:09:53.405939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.148 [2024-11-15 11:09:53.417767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.148 [2024-11-15 11:09:53.418207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.148 [2024-11-15 11:09:53.418219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e000 with addr=10.0.0.2, port=4420 00:29:34.148 [2024-11-15 11:09:53.418224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e000 is same with the state(6) to be set 00:29:34.148 [2024-11-15 11:09:53.418374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e000 (9): Bad file descriptor 00:29:34.148 [2024-11-15 11:09:53.418524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:34.148 [2024-11-15 11:09:53.418530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:34.148 [2024-11-15 11:09:53.418534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:34.148 [2024-11-15 11:09:53.418539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.148 [2024-11-15 11:09:53.429081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.148 [2024-11-15 11:09:53.430369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.148 11:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 575002 00:29:34.148 [2024-11-15 11:09:53.458049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
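Interleaved with the reconnect noise, the rpc_cmd calls above complete the standard target bring-up, and the moment the listener goes live the loop's last reset succeeds ("Resetting controller successful"). rpc_cmd wraps SPDK's scripts/rpc.py, so a standalone sketch of the same sequence — flags copied verbatim from the trace, not re-derived — is roughly:
rpc.py nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced
rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420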
00:29:35.349 4813.57 IOPS, 18.80 MiB/s [2024-11-15T10:09:55.815Z] 5813.88 IOPS, 22.71 MiB/s [2024-11-15T10:09:57.197Z] 6594.22 IOPS, 25.76 MiB/s [2024-11-15T10:09:58.136Z] 7219.00 IOPS, 28.20 MiB/s [2024-11-15T10:09:59.078Z] 7738.91 IOPS, 30.23 MiB/s [2024-11-15T10:10:00.019Z] 8158.67 IOPS, 31.87 MiB/s [2024-11-15T10:10:00.960Z] 8545.23 IOPS, 33.38 MiB/s [2024-11-15T10:10:01.899Z] 8855.07 IOPS, 34.59 MiB/s [2024-11-15T10:10:01.899Z] 9114.33 IOPS, 35.60 MiB/s 00:29:42.372 Latency(us) 00:29:42.372 [2024-11-15T10:10:01.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.372 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:42.372 Verification LBA range: start 0x0 length 0x4000 00:29:42.373 Nvme1n1 : 15.01 9117.21 35.61 13264.45 0.00 5700.11 552.96 15400.96 00:29:42.373 [2024-11-15T10:10:01.900Z] =================================================================================================================== 00:29:42.373 [2024-11-15T10:10:01.900Z] Total : 9117.21 35.61 13264.45 0.00 5700.11 552.96 15400.96 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.632 rmmod nvme_tcp 00:29:42.632 rmmod nvme_fabrics 00:29:42.632 rmmod nvme_keyring 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 576067 ']' 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 576067 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 576067 ']' 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 576067 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:42.632 11:10:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 576067 
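The summary table's header carries the job parameters (Core Mask 0x1, workload verify, queue depth 128, IO size 4096, ~15 s runtime); these map onto a bdevperf invocation along the lines of the sketch below — the harness's real command line lives in host/bdevperf.sh and is not shown here:
bdevperf -q 128 -o 4096 -w verify -t 15   # queue depth, IO size, workload, runtime per the table
The per-interval IOPS climbing from ~4.8k to ~9.1k over the run is consistent with throughput recovering after each induced disconnect/reset cycle.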
00:29:42.632 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:42.632 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:42.632 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 576067' 00:29:42.632 killing process with pid 576067 00:29:42.632 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 576067 00:29:42.632 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 576067 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.893 11:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.810 00:29:44.810 real 0m28.140s 00:29:44.810 user 1m2.829s 00:29:44.810 sys 0m7.709s 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.810 ************************************ 00:29:44.810 END TEST nvmf_bdevperf 00:29:44.810 ************************************ 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.810 ************************************ 00:29:44.810 START TEST nvmf_target_disconnect 00:29:44.810 ************************************ 00:29:44.810 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:45.071 * Looking for test storage... 
00:29:45.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.071 --rc genhtml_branch_coverage=1 00:29:45.071 --rc genhtml_function_coverage=1 00:29:45.071 --rc genhtml_legend=1 00:29:45.071 --rc geninfo_all_blocks=1 00:29:45.071 --rc geninfo_unexecuted_blocks=1 00:29:45.071 00:29:45.071 ' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.071 --rc genhtml_branch_coverage=1 00:29:45.071 --rc genhtml_function_coverage=1 00:29:45.071 --rc genhtml_legend=1 00:29:45.071 --rc geninfo_all_blocks=1 00:29:45.071 --rc geninfo_unexecuted_blocks=1 00:29:45.071 00:29:45.071 ' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.071 --rc genhtml_branch_coverage=1 00:29:45.071 --rc genhtml_function_coverage=1 00:29:45.071 --rc genhtml_legend=1 00:29:45.071 --rc geninfo_all_blocks=1 00:29:45.071 --rc geninfo_unexecuted_blocks=1 00:29:45.071 00:29:45.071 ' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.071 --rc genhtml_branch_coverage=1 00:29:45.071 --rc genhtml_function_coverage=1 00:29:45.071 --rc genhtml_legend=1 00:29:45.071 --rc geninfo_all_blocks=1 00:29:45.071 --rc geninfo_unexecuted_blocks=1 00:29:45.071 00:29:45.071 ' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.071 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.072 11:10:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:53.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.219 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.220 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:53.220 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
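The records above resolve each Intel E810 port (device ID 0x159b) to its kernel netdev by listing /sys/bus/pci/devices/<pci>/net, which yields cvl_0_0 and cvl_0_1, and the records that follow wire those two ports back-to-back: the target port moves into its own network namespace so a single host can play both ends of the TCP connection. A condensed sketch of that setup, built only from commands that appear in this trace:

  # discovery: a port's netdev name lives under its PCI node in sysfs
  ls /sys/bus/pci/devices/0000:4b:00.0/net     # -> cvl_0_0 (target side)
  ls /sys/bus/pci/devices/0000:4b:00.1/net     # -> cvl_0_1 (initiator side)

  # target side: dedicated namespace, 10.0.0.2/24
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator side: default namespace, 10.0.0.1/24, NVMe/TCP port opened
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions before any NVMe traffic flows
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1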
00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.220 11:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:29:53.220 00:29:53.220 --- 10.0.0.2 ping statistics --- 00:29:53.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.220 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:29:53.220 00:29:53.220 --- 10.0.0.1 ping statistics --- 00:29:53.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.220 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.220 ************************************ 00:29:53.220 START TEST nvmf_target_disconnect_tc1 00:29:53.220 ************************************ 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.220 11:10:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.220 [2024-11-15 11:10:12.304398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.220 [2024-11-15 11:10:12.304501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22ad0 with addr=10.0.0.2, port=4420 00:29:53.220 [2024-11-15 11:10:12.304529] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:53.220 [2024-11-15 11:10:12.304542] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:53.220 [2024-11-15 11:10:12.304549] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:53.220 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:53.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:53.220 Initializing NVMe Controllers 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:53.220 00:29:53.220 real 0m0.145s 00:29:53.220 user 0m0.060s 00:29:53.220 sys 0m0.085s 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.220 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:53.221 ************************************ 00:29:53.221 END TEST nvmf_target_disconnect_tc1 00:29:53.221 ************************************ 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
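tc1 passes precisely because the connect attempt fails: nothing is listening on 10.0.0.2:4420 yet, so reconnect exits non-zero after connect() returns errno 111, and the NOT wrapper from autotest_common.sh inverts that status, which is what the es=1 and (( !es == 0 )) records above show. A minimal sketch of the same negative-test idiom (simplified; the real helper is more careful, e.g. the (( es > 128 )) record above treats signal deaths as genuine failures):

  NOT() {
      "$@" && return 1    # command unexpectedly succeeded: the test fails
      return 0            # command failed as required: the test passes
  }
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'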
00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.221 ************************************ 00:29:53.221 START TEST nvmf_target_disconnect_tc2 00:29:53.221 ************************************ 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=582118 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 582118 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 582118 ']' 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.221 11:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.221 [2024-11-15 11:10:12.471841] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:29:53.221 [2024-11-15 11:10:12.471900] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.221 [2024-11-15 11:10:12.571719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.221 [2024-11-15 11:10:12.624461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.221 [2024-11-15 11:10:12.624512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
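The -m 0xF0 mask handed to nvmf_tgt selects mask bits 4 through 7, which is why the four "Reactor started" notices below land on cores 4, 5, 6 and 7. Once the app answers on /var/tmp/spdk.sock, the rpc_cmd records that follow assemble the whole target stack; spelled out with scripts/rpc.py, which is what rpc_cmd ultimately drives (a sketch, assuming the default RPC socket), the sequence amounts to:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB ram bdev, 512 B blocks
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420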
00:29:53.221 [2024-11-15 11:10:12.624520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.221 [2024-11-15 11:10:12.624527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.221 [2024-11-15 11:10:12.624534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.221 [2024-11-15 11:10:12.626635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:53.221 [2024-11-15 11:10:12.626807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:53.221 [2024-11-15 11:10:12.626943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:53.221 [2024-11-15 11:10:12.626945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.796 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.796 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:53.796 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.796 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.796 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 Malloc0 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 [2024-11-15 11:10:13.384834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 11:10:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 [2024-11-15 11:10:13.425253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=582467 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:54.062 11:10:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.983 11:10:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 582118 00:29:55.983 11:10:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error 
(sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write 
completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Write completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 Read completed with error (sct=0, sc=8) 00:29:55.983 starting I/O failed 00:29:55.983 [2024-11-15 11:10:15.463708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:55.983 [2024-11-15 11:10:15.464133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.464162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.464467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.464478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.464865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.464919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.465085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.465108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 
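Everything from the kill -9 above onward is the point of tc2. The target process is gone mid-run, so every queued command on the reconnect app's I/O qpairs (queue depth 32, per -q 32) completes in error with sct=0, sc=8, which in the NVMe generic status set means the command was aborted because its submission queue was deleted; the completion polls then report CQ transport error -6, and each later reconnect attempt dies in connect() with errno 111 (ECONNREFUSED) because nothing listens on 10.0.0.2:4420 any more. The refusal itself can be reproduced from a plain shell with bash's /dev/tcp redirection (illustrative only, run from the initiator side):

  (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null \
      || echo "connection refused (errno 111): no NVMe/TCP listener"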
00:29:55.983 [2024-11-15 11:10:15.465328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.465341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.465790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.465846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.466180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.466194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.466551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.466569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.466935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.467302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.983 [2024-11-15 11:10:15.467314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-11-15 11:10:15.467803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.984 [2024-11-15 11:10:15.467858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.984 qpair failed and we were unable to recover it. 00:29:55.984 [2024-11-15 11:10:15.467983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.984 [2024-11-15 11:10:15.467996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.984 qpair failed and we were unable to recover it. 00:29:55.984 [2024-11-15 11:10:15.468412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.984 [2024-11-15 11:10:15.468424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.984 qpair failed and we were unable to recover it. 00:29:55.984 [2024-11-15 11:10:15.468764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.984 [2024-11-15 11:10:15.468777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.984 qpair failed and we were unable to recover it. 
00:29:55.985 [2024-11-15 11:10:15.493055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-15 11:10:15.493067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-15 11:10:15.493376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-15 11:10:15.493387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.493754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.493773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.494066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.494078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.494385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.494398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.494724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.494735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.495042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.495362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.495374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.495701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.495714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.496038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.496049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-15 11:10:15.496361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.496372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.496690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.496701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.496918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.496929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.497238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.497250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.497558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.497580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.497903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.497916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.498274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.498286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.498595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.498609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.499035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.499050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.499357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.499372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-15 11:10:15.499707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.499722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.500059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.500073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.500388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.500403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.500723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.500738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.500955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.500970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.501310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.501324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.501544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.501559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.501908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.501923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.502247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.502262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.502587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.502603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-15 11:10:15.502935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.502951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.503189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.503204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.503533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.503548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.503824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.503839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.504201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.504215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.504357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.504372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.504681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.504697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.505009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.505024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.505327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.505344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-15 11:10:15.505577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-15 11:10:15.505597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 
00:29:55.987 [2024-11-15 11:10:15.505938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.505953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.506164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.506182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.506484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.506499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.506756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.506771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.507092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.507108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.507433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.507449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-15 11:10:15.507786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-15 11:10:15.507801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:56.260 [2024-11-15 11:10:15.508138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.260 [2024-11-15 11:10:15.508156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.260 qpair failed and we were unable to recover it. 00:29:56.260 [2024-11-15 11:10:15.508470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.260 [2024-11-15 11:10:15.508486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.260 qpair failed and we were unable to recover it. 00:29:56.260 [2024-11-15 11:10:15.508815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.260 [2024-11-15 11:10:15.508832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.260 qpair failed and we were unable to recover it. 
00:29:56.260 [2024-11-15 11:10:15.509165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.260 [2024-11-15 11:10:15.509181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.260 qpair failed and we were unable to recover it. 00:29:56.260 [2024-11-15 11:10:15.509539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.260 [2024-11-15 11:10:15.509555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.260 qpair failed and we were unable to recover it. 00:29:56.260 [2024-11-15 11:10:15.509933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.509950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.510289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.510305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.510640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.510657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.510974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.510989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.511216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.511231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.511560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.511580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.511905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.511920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.512249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.512268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 
00:29:56.261 [2024-11-15 11:10:15.512593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.512614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.512950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.512969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.513311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.513331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.513654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.513674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.514029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.514049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.514391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.514410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.514753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.514774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.515111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.515130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.515455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.515475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.515832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.515852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 
00:29:56.261 [2024-11-15 11:10:15.516243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.516263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.516552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.516579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.516896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.516915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.517245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.517272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.517650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.517671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.518004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.518366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.518385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.518717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.518738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.519169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.519188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.519518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.519543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 
00:29:56.261 [2024-11-15 11:10:15.519934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.519954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.520277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.520298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.520672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.520692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.521035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.521056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.521426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.261 [2024-11-15 11:10:15.521445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.261 qpair failed and we were unable to recover it. 00:29:56.261 [2024-11-15 11:10:15.521760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.521780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.521990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.522011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.522342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.522361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.522702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.522723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.523044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.523063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 
00:29:56.262 [2024-11-15 11:10:15.523396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.523415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.523741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.523761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.524101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.524127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.524493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.524519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.524767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.524794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.525171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.525196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.525578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.525605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.525898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.525923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.526301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.526327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.526681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.526709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 
00:29:56.262 [2024-11-15 11:10:15.527076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.527102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.527466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.527492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.527868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.527894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.528271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.528297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.528654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.528682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.529047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.529073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.529434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.529462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.529797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.529823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.530186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.530212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.530580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 
00:29:56.262 [2024-11-15 11:10:15.530972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.530998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.531341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.531368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.531707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.531734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.532098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.532124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.532491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.532517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.532881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.532907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.533273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.262 [2024-11-15 11:10:15.533299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.262 qpair failed and we were unable to recover it. 00:29:56.262 [2024-11-15 11:10:15.533765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.533792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.534144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.534169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.534546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.534587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 
00:29:56.263 [2024-11-15 11:10:15.534943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.534970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.535327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.535352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.535693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.535721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.535944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.535974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.536320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.536351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.536699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.536730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.537117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.537147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.537546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.537598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.538017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.538047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 00:29:56.263 [2024-11-15 11:10:15.538374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.263 [2024-11-15 11:10:15.538403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.263 qpair failed and we were unable to recover it. 
00:29:56.263 [2024-11-15 11:10:15.538649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.263 [2024-11-15 11:10:15.538679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.263 qpair failed and we were unable to recover it.
00:29:56.270 [... the same three messages repeat, with only the timestamps changing, for every subsequent connection attempt through 2024-11-15 11:10:15.618 (tqpair=0x7fdfa8000b90, addr=10.0.0.2, port=4420) ...]
00:29:56.270 [2024-11-15 11:10:15.618878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.618908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.619139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.619168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.619583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.619614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.620014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.620044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.620304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.620335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.620558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.620605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.620863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.620905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.621288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.621318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.621686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.621718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.622117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.622146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 
00:29:56.270 [2024-11-15 11:10:15.622516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.622546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.622898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.622928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.623224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.623252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.623606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.623917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.623947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.624187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.624582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.624613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.625046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.625411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.625441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.625785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.625816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 
00:29:56.270 [2024-11-15 11:10:15.626175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.626204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.626610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.626642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.627038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.627068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.627349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.627377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.627745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.627776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.628141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.628171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.628537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.628579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.628941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.628971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.629229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.629260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 00:29:56.270 [2024-11-15 11:10:15.629645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.270 [2024-11-15 11:10:15.629677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.270 qpair failed and we were unable to recover it. 
00:29:56.270 [2024-11-15 11:10:15.630059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.630090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.630461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.630490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.630860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.630891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.631261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.631294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.631653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.631683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.632073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.632105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.632449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.632480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.632823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.632853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.633217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.633246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.633612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.633643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 
00:29:56.271 [2024-11-15 11:10:15.633992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.634021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.634393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.634422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.634706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.635101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.635130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.635496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.635525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.635969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.636001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.636365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.636400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.636755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.636787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.637151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.637180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.637436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.637467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 
00:29:56.271 [2024-11-15 11:10:15.637720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.637753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.638140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.638171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.638524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.638554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.638893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.638923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.639276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.271 [2024-11-15 11:10:15.639306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.271 qpair failed and we were unable to recover it. 00:29:56.271 [2024-11-15 11:10:15.639662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.639693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.640039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.640068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.640408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.640437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.640794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.640824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.641187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.641217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 
00:29:56.272 [2024-11-15 11:10:15.641593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.641625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.641974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.642005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.642386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.642415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.642746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.642777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.643128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.643157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.643529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.643558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.643949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.643979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.644357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.644386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.644761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.644791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.645151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.645179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 
00:29:56.272 [2024-11-15 11:10:15.645442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.645471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.645827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.645858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.646113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.646142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.646510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.646539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.646965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.646995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.647357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.647387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.647748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.647778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.648141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.648170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.648531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.648578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.648915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.648944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 
00:29:56.272 [2024-11-15 11:10:15.649308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.649337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.649782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.649812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.650168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.650196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.650543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.650584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.650969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.650998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.651353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.651381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-11-15 11:10:15.651769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.272 [2024-11-15 11:10:15.651805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.652183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.652212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.652497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.652525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.652927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.652957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-11-15 11:10:15.653312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.653342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.653698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.653729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.654097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.654125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.654495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.654525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.654882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.654913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.655280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.655309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.655674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.655704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.656043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.656071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.656436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.656465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.656814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.656844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-11-15 11:10:15.657206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.657235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.657596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.657627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.657985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.658013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.658379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.658408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.658770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.658800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.659163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.659192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.659611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.659640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.659985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.660015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.660385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.660414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.660780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.660810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-11-15 11:10:15.661184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.661213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.661476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.661505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.661893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.661923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.662285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.662314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.662699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.662729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.662980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.663012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.663350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.663380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.663633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.663667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.663913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.663941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-11-15 11:10:15.664292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.664321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-11-15 11:10:15.664684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.273 [2024-11-15 11:10:15.664714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.664967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.664996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.665265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.665293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.665667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.665697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.665948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.665977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.666217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.666250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.666598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.666637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.666906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.666935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.667309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.667338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-11-15 11:10:15.667698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.274 [2024-11-15 11:10:15.667727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.274 qpair failed and we were unable to recover it. 
00:29:56.274 [2024-11-15 11:10:15.668074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.274 [2024-11-15 11:10:15.668103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.274 qpair failed and we were unable to recover it.
00:29:56.274 [... the same three-line error triplet ("connect() failed, errno = 111" / "sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats with no variation other than the advancing microsecond timestamps, from 11:10:15.668 through 11:10:15.747; roughly 200 further occurrences elided here ...]
00:29:56.280 [2024-11-15 11:10:15.748333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.748363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.748607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.748637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.749040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.749070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.749422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.749452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.749684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.749714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.750071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.750101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.750466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.750495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.750877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.280 [2024-11-15 11:10:15.750907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.280 qpair failed and we were unable to recover it. 00:29:56.280 [2024-11-15 11:10:15.751260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.751290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.751731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.751761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 
00:29:56.281 [2024-11-15 11:10:15.752077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.752115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.752445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.752474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.752757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.752795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.753048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.753080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.753453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.753482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.753882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.754252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.754283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.754624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.754655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.755035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.755064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.755429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.755458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 
00:29:56.281 [2024-11-15 11:10:15.755814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.755843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.756183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.756212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.756586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.756617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.756981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.757010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.757393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.757422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.757784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.757815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.758189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.758224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.758466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.758495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.758759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.758793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.759122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.759151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 
00:29:56.281 [2024-11-15 11:10:15.759522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.759552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.759934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.759963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.760328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.760358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.760730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.760761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.761036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.761065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.761419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.761448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.761821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.761851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.762210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.762239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.762604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.762634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.763028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.763058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 
00:29:56.281 [2024-11-15 11:10:15.763416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.763446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.281 [2024-11-15 11:10:15.763799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.281 [2024-11-15 11:10:15.763829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.281 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.764047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.764077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.764435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.764464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.764831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.764864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.765214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.765243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.765598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.765628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.766028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.766058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.766435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.766464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.766809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.766840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 
00:29:56.282 [2024-11-15 11:10:15.767199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.767229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.767609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.767640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.767995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.768025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.768250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.768283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.768716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.768747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.769105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.769135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.769590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.769620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.769891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.769920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.770274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.770303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.770666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.770696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 
00:29:56.282 [2024-11-15 11:10:15.771077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.771107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.771481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.771510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.771863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.771894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.772263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.772293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.772558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.772601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.772954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.772984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.773227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.773256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.773637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.773685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.773972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.774001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 00:29:56.282 [2024-11-15 11:10:15.774350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.282 [2024-11-15 11:10:15.774379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.282 qpair failed and we were unable to recover it. 
00:29:56.283 [2024-11-15 11:10:15.774806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.283 [2024-11-15 11:10:15.774836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.283 qpair failed and we were unable to recover it. 00:29:56.283 [2024-11-15 11:10:15.775081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.283 [2024-11-15 11:10:15.775110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.283 qpair failed and we were unable to recover it. 00:29:56.283 [2024-11-15 11:10:15.775467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.283 [2024-11-15 11:10:15.775497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.283 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.775837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.775871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.776239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.776272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.776635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.776665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.776892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.776921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.777280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.777309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.777757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.777788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.778132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.778164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
00:29:56.558 [2024-11-15 11:10:15.778512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.778543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.778905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.778937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.779313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.779342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.779694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.779724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.780097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.780128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.780493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.780522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.780781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.780812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.781183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.781212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.781585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.781615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.781981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.782010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
00:29:56.558 [2024-11-15 11:10:15.782385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-15 11:10:15.782415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-15 11:10:15.782783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.782815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.783175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.783204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.783467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.783503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.783873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.783904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.784294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.784324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.784680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.784711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.784956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.784985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.785368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.785397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.785772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.785802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 
00:29:56.559 [2024-11-15 11:10:15.786163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.786193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.786546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.786587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.786866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.786896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.787032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.787060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.787443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.787472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.787818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.787850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.787997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.788027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.788378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.788407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.788822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.788853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.789101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.789135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 
00:29:56.559 [2024-11-15 11:10:15.789414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.789446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.789795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.789828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.790244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.790691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.790722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.791082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.791111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.791481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.791509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.791873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.791905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.792272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.792303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.792641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.792672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.793013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.793042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 
00:29:56.559 [2024-11-15 11:10:15.793414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.793442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.793830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.793860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.794306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.794336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.794705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.794737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.795131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.795160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.795381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.795414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it. 00:29:56.559 [2024-11-15 11:10:15.795803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-15 11:10:15.795833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-15 11:10:15.796055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-15 11:10:15.796084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-15 11:10:15.796479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-15 11:10:15.796508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-15 11:10:15.796759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-15 11:10:15.796790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 
00:29:56.560 [2024-11-15 11:10:15.797006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-15 11:10:15.797035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it.
[... some 200 further occurrences of the same three-message error sequence elided: from 11:10:15.797006 through 11:10:15.876196 every connection attempt to 10.0.0.2 port 4420 on tqpair=0x7fdfa8000b90 fails in posix_sock_create with errno = 111, nvme_tcp_qpair_connect_sock reports the socket connection error, and the qpair cannot be recovered ...]
00:29:56.566 [2024-11-15 11:10:15.876166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.876196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it.
00:29:56.566 [2024-11-15 11:10:15.876558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.876596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.876954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.876984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.877354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.877383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.877722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.877760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.878124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.878152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.878534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.878573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.878929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.878959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.879320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.879350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.879713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.879744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.880111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.880140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 
00:29:56.566 [2024-11-15 11:10:15.880494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.880523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.880970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.881000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.566 qpair failed and we were unable to recover it. 00:29:56.566 [2024-11-15 11:10:15.881428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.566 [2024-11-15 11:10:15.881457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.881807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.881837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.882199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.882228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.882589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.882619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.883006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.883035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.883274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.883306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.883669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.883700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.884066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.884095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 
00:29:56.567 [2024-11-15 11:10:15.884460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.884489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.884833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.884863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.885215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.885244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.885603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.885633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.885947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.885977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.886341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.886370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.886728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.886759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.887119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.887148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.887453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.887482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.887855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.887886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 
00:29:56.567 [2024-11-15 11:10:15.888268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.888297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.888599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.888630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.888972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.889002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.889364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.889398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.889752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.889783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.890131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.890160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.890415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.890444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.890800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.890831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.891193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.891223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.891591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.891622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 
00:29:56.567 [2024-11-15 11:10:15.891983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.892012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.892380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.892409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.892859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.892889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.893275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.893304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.893667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.893697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.894036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.894066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.567 [2024-11-15 11:10:15.894419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.567 [2024-11-15 11:10:15.894448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.567 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.894787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.894818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.895062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.895091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.895458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.895487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 
00:29:56.568 [2024-11-15 11:10:15.895867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.895897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.896254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.896283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.896624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.896654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.897101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.897130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.897464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.897493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.897850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.897881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.898217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.898247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.898595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.898625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.898980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.899009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.899435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.899464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 
00:29:56.568 [2024-11-15 11:10:15.899699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.899733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.900090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.900120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.900479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.900508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.900883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.900920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.901244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.901273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.901627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.901658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.902108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.902138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.902473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.902502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.902859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.902890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.903267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 
00:29:56.568 [2024-11-15 11:10:15.903667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.903697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.568 [2024-11-15 11:10:15.903970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.568 [2024-11-15 11:10:15.903999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.568 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.904330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.904361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.904729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.904766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.905120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.905150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.905517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.905546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.905888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.905918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.906274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.906302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.906556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.906598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.907005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.907035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 
00:29:56.569 [2024-11-15 11:10:15.907403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.907432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.907813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.907844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.908094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.908126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.908506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.908535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.908897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.908928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.909293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.909322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.909559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.909597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.909972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.910001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.910329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.910358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.910717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.910747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 
00:29:56.569 [2024-11-15 11:10:15.911110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.911139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.911503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.911532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.911905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.911935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.912297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.912326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.912675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.913068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.913096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.913458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.913488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.913833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.913863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.914219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.914249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.914608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.914639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 
00:29:56.569 [2024-11-15 11:10:15.915003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.915032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.915268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.915301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.915684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.915714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.916077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.916108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.916534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.916573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.569 [2024-11-15 11:10:15.916886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.569 [2024-11-15 11:10:15.916916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.569 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.917275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.917304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.917663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.917694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.918064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.918093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.918460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.918489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-15 11:10:15.918849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.918879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.919135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.919167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.919523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.919553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.919983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.920020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.920359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.920389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.920727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.920757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.921122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.921151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.921597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.921629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.921982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.922011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.922385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.922414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-15 11:10:15.922673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.922702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.923052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.923081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.923461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.923490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.923848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.923878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.924152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.924181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.924477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.924506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.924757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.924787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.925196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.925227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.925584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.925615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.925967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.925996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-15 11:10:15.926340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.926369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.926769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.926799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.927153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.927183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.927521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.927550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.927916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.927946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.928306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.928335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.928715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.928746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.929005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.929033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.929379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.929408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-15 11:10:15.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.929806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-15 11:10:15.930163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-15 11:10:15.930192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.930548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.930598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.930977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.931006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.931371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.931400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.931776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.931806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.932164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.932193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.932536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.932573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.932926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.932956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.933329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.933358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.933696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.933728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-15 11:10:15.934093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.934122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.934485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.934514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.934886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.934918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.935287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.935321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.935684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.935715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.936094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.936123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.936488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.936517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.936875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.936904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.937321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.937351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.937634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.937663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-15 11:10:15.938038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.938067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.938467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.938498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.938837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.938868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.939211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.939241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.939600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.939630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.939979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.940008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.940273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.940302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.940682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.940712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.941075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.941105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.941464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.941493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-15 11:10:15.941861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.941891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.942266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.942296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.942552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.942597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.942940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.942969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.943337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-15 11:10:15.943366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-15 11:10:15.943726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.943756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.944115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.944511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.944539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.944919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.944949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.945310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.945339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-15 11:10:15.945703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.945734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.946148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.946177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.946545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.946589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.946907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.946937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.947311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.947340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.947712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.947742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.948146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.948175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.948526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.948556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.948934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.948963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.949327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.949356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-15 11:10:15.949729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.949760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.950124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.950153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.950518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.950547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.950931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.950968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.951308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.951337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.951713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.951743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.952099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.952128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.952498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.952528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.952887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.952917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.953292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.953321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-15 11:10:15.953677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.953708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.954064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.954093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.954350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.954379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.954735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.954767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.955165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.955194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.955547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.955586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.955954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.955985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-15 11:10:15.956354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-15 11:10:15.956383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.956628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.956660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.956920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.956950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-15 11:10:15.957329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.957358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.957718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.957750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.958115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.958145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.958515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.958545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.958697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.958726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.959099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.959128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.959488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.959517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.959877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.959909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.960325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.960353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.960717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.960747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-15 11:10:15.961092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.961122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.961491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.961520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.961888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.961919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.962283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.962314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.962716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.962747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.963105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.963134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.963498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.963528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.963884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.963915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.964321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.964350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.964720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.964751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-15 11:10:15.965120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.965149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.965503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.965532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.965929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.965959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.966248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.966278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.966638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.966670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.967036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.967068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.967416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.967446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.967852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.967882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.968151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-15 11:10:15.968180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-15 11:10:15.968533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.968574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 
00:29:56.574 [2024-11-15 11:10:15.968980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.969010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.969352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.969387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.969667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.969697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.970035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.970068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.970315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.970345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.970691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.970730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.971105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.971135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.971497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.971527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.971883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.971914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.972277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.972306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 
00:29:56.574 [2024-11-15 11:10:15.972555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.972596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.972871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.972901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.973270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.973299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.973670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.973699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.974111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.974141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.974383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.974416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.974674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.974705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.975089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.975118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.975485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.975516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.975873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.975905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 
00:29:56.574 [2024-11-15 11:10:15.976263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.976301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.976666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.976697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.977051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.977082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.977447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.977476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.977841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.977872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.978232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.978609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.978641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-15 11:10:15.978999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-15 11:10:15.979029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.979390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.979419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.979862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.979892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 
00:29:56.575 [2024-11-15 11:10:15.980238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.980269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.980635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.980665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.980875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.980907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.981285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.981314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.981686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.981718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.981980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.982010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.982364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.982394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.982721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.982754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.983174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.983204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.983595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.983627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 
00:29:56.575 [2024-11-15 11:10:15.983987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.984017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.984357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.984386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.984727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.984761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.985130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.985160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.985530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.985559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.985949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.985986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.986227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.986257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.986645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.986676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.987039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.987070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.987359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.987389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 
00:29:56.575 [2024-11-15 11:10:15.987747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.987778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.988159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.988188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.988570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.988600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.988936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.988977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.989347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.989378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.989734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.989764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.990151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.990498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.990528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.990929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.990960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 00:29:56.575 [2024-11-15 11:10:15.991317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.575 [2024-11-15 11:10:15.991348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.575 qpair failed and we were unable to recover it. 
00:29:56.575 [2024-11-15 11:10:15.991594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.575 [2024-11-15 11:10:15.991634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.575 qpair failed and we were unable to recover it.
[... the same three-line error triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously, with only the timestamps advancing, from 2024-11-15 11:10:15.991 through 11:10:16.070 ...]
00:29:56.582 [2024-11-15 11:10:16.070357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.582 [2024-11-15 11:10:16.070386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.582 qpair failed and we were unable to recover it.
00:29:56.582 [2024-11-15 11:10:16.070750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-15 11:10:16.070780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-15 11:10:16.071140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-15 11:10:16.071169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-15 11:10:16.071533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-15 11:10:16.071561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-15 11:10:16.071844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-15 11:10:16.071876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.072234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.072267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.072623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.072655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.073015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.073045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.073397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.073427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.073802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.073833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.074201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.074230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 
00:29:56.855 [2024-11-15 11:10:16.074592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.074623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.074989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.075018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.075376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.075408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.075767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.075797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.076054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.076084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.076468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.076815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.076846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.077232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-15 11:10:16.077263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-15 11:10:16.077601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.077631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.078004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 
00:29:56.856 [2024-11-15 11:10:16.078367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.078396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.078762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.078791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.079161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.079190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.079548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.079588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.079965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.079994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.080364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.080393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.080755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.080786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.081143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.081172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.081339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.081371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.081752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.081782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 
00:29:56.856 [2024-11-15 11:10:16.082151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.082188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.082533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.082571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.082916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.082945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.083304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.083333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.083703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.083733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.084123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.084152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.084404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.084437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.084824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.084856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.085192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.085223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.085595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.085627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 
00:29:56.856 [2024-11-15 11:10:16.086042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.086072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.086305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.086339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.086691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.086723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.087091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.087123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.087478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.087508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.087880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.087910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.088278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.088309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.088681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.856 [2024-11-15 11:10:16.088712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.856 qpair failed and we were unable to recover it. 00:29:56.856 [2024-11-15 11:10:16.089079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.089108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.089469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.089500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 
00:29:56.857 [2024-11-15 11:10:16.089858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.089893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.090244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.090273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.090645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.090676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.091038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.091067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.091432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.091463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.091822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.091852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.092226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.092254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.092510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.092543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.092794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.092826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.093084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.093113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 
00:29:56.857 [2024-11-15 11:10:16.093496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.093526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.093893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.093925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.094266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.094296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.094555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.094603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.094880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.094910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.095252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.095282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.095655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.095687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.096046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.096076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.096478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.096508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.096917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.096948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 
00:29:56.857 [2024-11-15 11:10:16.097324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.097360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.097727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.097758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.098145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.098174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.098533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.098580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.098940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.098970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.099331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.099360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.099730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.099762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.857 [2024-11-15 11:10:16.100090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.857 [2024-11-15 11:10:16.100119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.857 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.100366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.100395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.100814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.100844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 
00:29:56.858 [2024-11-15 11:10:16.101188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.101219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.101595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.101626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.101998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.102028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.102383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.102414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.102790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.102821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.103159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.103189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.103556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.103600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.103982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.104012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.104378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.104406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.104777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.104807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 
00:29:56.858 [2024-11-15 11:10:16.105173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.105202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.105643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.105674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.106043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.106071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.106432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.106462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.106809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.106840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.107172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.107202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.107556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.107597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.108001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.108032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.108394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.108423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.108788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.108819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 
00:29:56.858 [2024-11-15 11:10:16.109184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.109213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.109582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.109612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.109973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.110003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.110354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.110385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.110748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.110779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.111046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.111075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.111404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.111434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.111803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.111834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.112200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.112229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.858 [2024-11-15 11:10:16.112598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.112629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 
00:29:56.858 [2024-11-15 11:10:16.113020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.858 [2024-11-15 11:10:16.113056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.858 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.113293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.113326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.113668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.113699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.114046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.114076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.114436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.114466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.114830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.114862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.115212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.115242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.115606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.115637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.115970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.116001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.116360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.116388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 
00:29:56.859 [2024-11-15 11:10:16.116738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.116770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.117166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.117194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.117557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.117598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.117957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.117986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.118394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.118424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.118793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.118823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.119195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.119226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.119595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.119626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.119987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.120017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 00:29:56.859 [2024-11-15 11:10:16.120380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.859 [2024-11-15 11:10:16.120409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.859 qpair failed and we were unable to recover it. 
00:29:56.859 [2024-11-15 11:10:16.120836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.859 [2024-11-15 11:10:16.120867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.859 qpair failed and we were unable to recover it.
[... the three messages above repeat with only the timestamps advancing, from 00:29:56.859 [2024-11-15 11:10:16.121108] through 00:29:56.866 [2024-11-15 11:10:16.200961]; every retry fails with errno = 111 against the same tqpair=0x7fdfa8000b90, addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:56.866 [2024-11-15 11:10:16.201325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.201355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.201717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.201748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.202112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.202141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.202500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.202528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.202972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.203003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.203346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.203375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.203728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.203759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.204114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.204144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.204504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.204533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.204896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.204927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 
00:29:56.866 [2024-11-15 11:10:16.205282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.205311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.205683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.205713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.206093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.206125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.206487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.206517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.206901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.206931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.207295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.207323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.207686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.207717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.208077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.208106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.208354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.208383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.208790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.208820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 
00:29:56.866 [2024-11-15 11:10:16.209179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.209210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.209587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.209618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.209867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.209895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.210322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.210351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.866 qpair failed and we were unable to recover it. 00:29:56.866 [2024-11-15 11:10:16.210729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.866 [2024-11-15 11:10:16.210761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.211129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.211159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.211530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.211560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.211918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.211948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.212350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.212379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.212739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.212769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 
00:29:56.867 [2024-11-15 11:10:16.213128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.213158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.213490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.213519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.213782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.213812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.214195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.214223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.214636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.214672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.215068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.215098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.215463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.215495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.215861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.215900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.216310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.216341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.216700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.216732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 
00:29:56.867 [2024-11-15 11:10:16.217098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.217128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.217492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.217522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.217978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.218009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.218375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.218404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.218760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.218790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.219062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.219093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.219462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.219493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.219862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.219893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.220262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.220294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.220656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.220689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 
00:29:56.867 [2024-11-15 11:10:16.221075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.221106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.221455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.221486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.221826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.221856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.867 [2024-11-15 11:10:16.222216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-11-15 11:10:16.222246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.867 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.222611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.222641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.223021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.223052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.223416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.223445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.223909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.224064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.224091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.224334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.224363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 
00:29:56.868 [2024-11-15 11:10:16.224745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.224775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.225152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.225181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.225445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.225477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.225737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.225771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.226128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.226158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.226459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.226488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.226864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.226896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.227254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.227284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.227527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.227557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.227967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.227998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 
00:29:56.868 [2024-11-15 11:10:16.228364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.228395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.228747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.228777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.229137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.229167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.229359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.229389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.229615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.229645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.230064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.230420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.230449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.230758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.230801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.231064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.231094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.231452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.231483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 
00:29:56.868 [2024-11-15 11:10:16.231731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.231762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.232181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.232212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.232586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.232618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.232914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.232944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.233284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.233322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.233695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.233726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.234085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.234116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.868 [2024-11-15 11:10:16.234368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-11-15 11:10:16.234398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.868 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.234821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.234852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.235220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.235249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 
00:29:56.869 [2024-11-15 11:10:16.235608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.235639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.235983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.236015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.236362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.236391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.236759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.236790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.237163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.237191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.237587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.237618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.237999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.238028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.238439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.238801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.238833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.239207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.239237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 
00:29:56.869 [2024-11-15 11:10:16.239594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.239626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.239982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.240011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.240365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.240395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.240743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.240773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.241025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.241056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.241425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.241455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.241869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.241901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.242259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.242289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.242683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.242713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.243107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.243136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 
00:29:56.869 [2024-11-15 11:10:16.243496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.243526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.243942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.243974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.244174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.244207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.244455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.244486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.244886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.244918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.245282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.245313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.245685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.245715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.246120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.246157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.246378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.246408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.246787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.246818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 
00:29:56.869 [2024-11-15 11:10:16.247173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-11-15 11:10:16.247204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.869 qpair failed and we were unable to recover it. 00:29:56.869 [2024-11-15 11:10:16.247627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.247658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.248002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.248033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.248449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.248480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.248873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.248904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.249245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.249276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.249672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.249712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.250095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.250125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.250375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.250404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-15 11:10:16.250787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-15 11:10:16.250819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 
00:29:56.870 [2024-11-15 11:10:16.251097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.251126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.251519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.251549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.251921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.251951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.252208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.252238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.252485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.252514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.252892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.252923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.253269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.253299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.253678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.253710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.254069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.254099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.254464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.254493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.254761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.254791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.255146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.255175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.255412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.255442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.255833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.255864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.256221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.256253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.256625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.256655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.257035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.257064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.257420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.257450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.257685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.257717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.258090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.258122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.258528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.258558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.258991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.259021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.259375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.259405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.259767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.259798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.260159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.260190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.260556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.870 [2024-11-15 11:10:16.260598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.870 qpair failed and we were unable to recover it.
00:29:56.870 [2024-11-15 11:10:16.260856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.260886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.261220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.261255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.261595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.261626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.261973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.262005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.262342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.262373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.262708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.262738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.263131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.263160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.263525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.263555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.263965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.263994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.264380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.264411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.264773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.264804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.265167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.265197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.265576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.265608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.266011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.266043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.266261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.266291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.266672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.266705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.266959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.266991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.267339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.267369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.267738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.267768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.268127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.268157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.268390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.268418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.268711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.268741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.269125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.269157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.269455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.269484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.269842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.269873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.270252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.270282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.270644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.270675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.270922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.871 [2024-11-15 11:10:16.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.871 qpair failed and we were unable to recover it.
00:29:56.871 [2024-11-15 11:10:16.271195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.271225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.271516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.271547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.271986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.272017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.272377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.272405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.272788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.272818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.273183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.273214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.273620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.273652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.273899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.273928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.274174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.274204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.274556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.274601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.274945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.274976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.275227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.275260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.275659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.275690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.276021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.276057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.276421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.276450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.276709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.276740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.277113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.277143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.277453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.277483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.277720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.277750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.278121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.278152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.278498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.278528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.278897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.278928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.279332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.279361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.279726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.279757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.280128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.280158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.280523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.280552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.280931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.280961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.281359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.281389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.281749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.281779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.282132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.282163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.282494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.282525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.282955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.282985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.283320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.872 [2024-11-15 11:10:16.283348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.872 qpair failed and we were unable to recover it.
00:29:56.872 [2024-11-15 11:10:16.283687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.283718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.284082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.284113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.284474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.284504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.284855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.284886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.285254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.285284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.285591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.285621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.285985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.286014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.286387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.286417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.286753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.286782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.287145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.287175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.287586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.287618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.287985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.288015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.288363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.288392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.288752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.288783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.289135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.289167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.289524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.289553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.289801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.289830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.290213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.290243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.290600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.290631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.291007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.291384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.291422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.291798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.291829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.292170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.292200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.292571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.292603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.293000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.293029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.293393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.293423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.293789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.293820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.294202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.294232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.294600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.294632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.295012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.295041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.295407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.295436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.295807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.295837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.296184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.296214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.296599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.873 [2024-11-15 11:10:16.296631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.873 qpair failed and we were unable to recover it.
00:29:56.873 [2024-11-15 11:10:16.296952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.296981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.297349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.297378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.297748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.297779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.298128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.298158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.298522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.298552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.298890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.298919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.299278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.299306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.299680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.299711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.300057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.300086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.300441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.300472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.300708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.300740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.301122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.301151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.301517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.301547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.301917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.301949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.302284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.302314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.302668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.302700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.302980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.303008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.303401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.303431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.303682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.303712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.304078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.304107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.304468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.304497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.304855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.304886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.305259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.305289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.305546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.305590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.305955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.305984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.306340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.306370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.306719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.306761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.307152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.307183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.307546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.307588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.307979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.308008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.874 qpair failed and we were unable to recover it.
00:29:56.874 [2024-11-15 11:10:16.308442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.874 [2024-11-15 11:10:16.308471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.308815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.308847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.309221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.309252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.309639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.310031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.310061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.310436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.310465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.310803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.310833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.311200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.311230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.311601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.311990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.312020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.312376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.312406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.312747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.312778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.313043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.313072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.313441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.313471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.313836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.313867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.314208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.314238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.314487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.314516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.314898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.314928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.315172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.315201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.315575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.315607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.315970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.315999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.316357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.316387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.316735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.316766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.317014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.317043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.317414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.317443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.317790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.317823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.318187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.318218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.318559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.318603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.318962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.318992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.319355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.319384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.319747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.319777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.320185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-15 11:10:16.320214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 [2024-11-15 11:10:16.320591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.320621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.320984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.321014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.321355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.321383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.321750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.321781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.322141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.322176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.322533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.322576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.322821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.322851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.323216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.323245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.323597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.323629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.323985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.324014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.324274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.324303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.324715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.324745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.325123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.325152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.325517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.325546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.325913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.325942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.326322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.326351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.326715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.326746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.327113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.327142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.327513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.327544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.327922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.327953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.328321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.328350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.328732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.328763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.329119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.329150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.329490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.329519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.329728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.329758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.329959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.329992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.330336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-15 11:10:16.330366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-15 11:10:16.330634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-11-15 11:10:16.330664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-11-15 11:10:16.331051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-11-15 11:10:16.331080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-11-15 11:10:16.331436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-11-15 11:10:16.331465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-11-15 11:10:16.331821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.331851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.332195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.332226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.332474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.332506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.332915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.332947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.333284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.333316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.333654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.333684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.334079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.334108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-11-15 11:10:16.334451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.334482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.334828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.334858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.335259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.335288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.335648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.335678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.336039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.336069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.336453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.336483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.336941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.336972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.337313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.337351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.337702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.337734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.338034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.338063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-11-15 11:10:16.338418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.338448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.338790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.338820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.339168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.339197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.339557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.339607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.339963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.339993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.340368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.340396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.340752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.340781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.341194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.341224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.341583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.341615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.341970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.341999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-11-15 11:10:16.342261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.342290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.342642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.342673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.343049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.343079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.343439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.343469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.343839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.343871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.344129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.344158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.344507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-11-15 11:10:16.344536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-11-15 11:10:16.344996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.345026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.345381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.345412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.345790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.345828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-11-15 11:10:16.346162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.346191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.346427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.346456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.346638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.346669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.347034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.347063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.347432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.347463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.347825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.347858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.348224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.348254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.348635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.348666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.349097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.349126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.349492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.349522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-11-15 11:10:16.349897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.349927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.350280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.350309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.350681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.350712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.351039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.351076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.351437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.351466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.351813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.351845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.352204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.352236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.352491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.352526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.352919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.352951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.353289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.353327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-11-15 11:10:16.353699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.353729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.354089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.354118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.354475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.354504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.354889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.354919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.355264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.355294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.355632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.355663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.356069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.356100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-11-15 11:10:16.356421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-11-15 11:10:16.356450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.356793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.356825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.357188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.357219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-11-15 11:10:16.357583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.357615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.357982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.358011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.358384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.358412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.358792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.358822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.359190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.359220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.359591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.359622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.359984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.360013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.360373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.360401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.360760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.360791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.361157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.361187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-11-15 11:10:16.361569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.361600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.362039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.362069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.362421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.362450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.362799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.362829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.363194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.363230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.363602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.363635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.363982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.364011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.364384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.364413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.364789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.364827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.365190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.365219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-11-15 11:10:16.365584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.365616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.365971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.365999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.366365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.366394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.366768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.366798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.367127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.367157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.367518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.367549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.367927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.367958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.368375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.368404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.368761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.368793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.369183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-11-15 11:10:16.369546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.369588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.369944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-11-15 11:10:16.369974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-11-15 11:10:16.370220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-11-15 11:10:16.370249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-11-15 11:10:16.370597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-11-15 11:10:16.370627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-11-15 11:10:16.370986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-11-15 11:10:16.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-11-15 11:10:16.371264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-11-15 11:10:16.371294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-11-15 11:10:16.371526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-11-15 11:10:16.371558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-15 11:10:16.371845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.371877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.372229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.372262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.372498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.372526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 
00:29:57.156 [2024-11-15 11:10:16.372871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.372901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.373263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.373294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.373654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.373685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.374035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.374065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.374439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.374468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.374763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.374795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.375182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.375211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.375600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.375632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.376009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.376039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.376389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.376419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 
00:29:57.156 [2024-11-15 11:10:16.376763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.376793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.377122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.377151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.377485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.377514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.377949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.377979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.378347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.378384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.378791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.378823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.379195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.379587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.379617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.379996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.380025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 00:29:57.156 [2024-11-15 11:10:16.380396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.156 [2024-11-15 11:10:16.380425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.156 qpair failed and we were unable to recover it. 
00:29:57.156 [2024-11-15 11:10:16.380748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.156 [2024-11-15 11:10:16.380778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:57.156 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for roughly 200 further reconnect attempts between 11:10:16.381107 and 11:10:16.460385; only the timestamps differ, all other fields are identical ...]
00:29:57.163 [2024-11-15 11:10:16.460746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.163 [2024-11-15 11:10:16.460776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:57.163 qpair failed and we were unable to recover it.
00:29:57.163 [2024-11-15 11:10:16.461135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.461164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.461527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.461557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.461929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.461958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.462334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.462364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.462722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.462753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.463183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.463213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.463584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.463967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.463996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.464360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.464390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.464759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.464790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 
00:29:57.163 [2024-11-15 11:10:16.465052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.465082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.465531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.465576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.465976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.466007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.466361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.466392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.466653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.466685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.467058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.467088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.467451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.467481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.467809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.467840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.468200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.468231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.163 qpair failed and we were unable to recover it. 00:29:57.163 [2024-11-15 11:10:16.468586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.163 [2024-11-15 11:10:16.468618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 
00:29:57.164 [2024-11-15 11:10:16.468993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.469024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.469249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.469280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.469662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.469694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.470060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.470092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.470444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.470483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.470821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.470853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.471214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.471245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.471598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.471629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.471888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.471919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.472276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.472306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 
00:29:57.164 [2024-11-15 11:10:16.472678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.472709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.473067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.473099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.473331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.473366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.473538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.473592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.473989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.474021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.474407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.474647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.474680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.475030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.475061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.475419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.475821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.475853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 
00:29:57.164 [2024-11-15 11:10:16.476212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.476242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.476619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.476650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.477030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.477063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.477418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.477450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.477786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.477818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.478198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.478229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.478605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.478639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.478996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.479027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.479386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.479418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.479792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.479823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 
00:29:57.164 [2024-11-15 11:10:16.480066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.480098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.480470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.480501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.480888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.480920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.481277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.481308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.481654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.481685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.482102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.482134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.482493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.164 [2024-11-15 11:10:16.482524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.164 qpair failed and we were unable to recover it. 00:29:57.164 [2024-11-15 11:10:16.482902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.482934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.483287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.483319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.483669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.483701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 
00:29:57.165 [2024-11-15 11:10:16.484059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.484090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.484419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.484451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.484801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.484833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.485085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.485116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.485480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.485516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.485962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.485993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.486229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.486261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.486620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.486651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.487110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.487139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.487395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.487424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 
00:29:57.165 [2024-11-15 11:10:16.487755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.487787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.488166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.488196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.488597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.488628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.488996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.489025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.489371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.489399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.489817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.489846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.490098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.490130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.490485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.490515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.490957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.490988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.491352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.491382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 
00:29:57.165 [2024-11-15 11:10:16.491641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.491671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.491837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.491865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.492214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.492242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.492501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.492531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.492816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.492847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.493161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.165 [2024-11-15 11:10:16.493190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.165 qpair failed and we were unable to recover it. 00:29:57.165 [2024-11-15 11:10:16.493553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.493594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.493861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.493891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.494254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.494283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.494654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.494684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 
00:29:57.166 [2024-11-15 11:10:16.495058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.495087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.495338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.495370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.495719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.495751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.496099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.496129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.496471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.496501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.496886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.496916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.497285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.497315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.497692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.497724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.498075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.498104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.498377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.498407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 
00:29:57.166 [2024-11-15 11:10:16.498768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.498799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.499160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.499189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.499552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.499595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.499857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.499887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.500256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.500291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.500658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.500689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.501072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.501102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.501464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.501494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.501893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.501924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.502296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.502327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 
00:29:57.166 [2024-11-15 11:10:16.502694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.502726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.503178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.503208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.503541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.503599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.504019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.504048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.504459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.504489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.504744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.166 [2024-11-15 11:10:16.504775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.166 qpair failed and we were unable to recover it. 00:29:57.166 [2024-11-15 11:10:16.505153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.505184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.505560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.505604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.506012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.506042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.506411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.506442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 
00:29:57.167 [2024-11-15 11:10:16.506797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.506827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.507194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.507223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.507583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.507614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.507973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.508003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.508370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.508399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.508777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.508808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.509149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.509180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.509543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.509585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.510010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.510039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-11-15 11:10:16.510377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-11-15 11:10:16.510408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 
00:29:57.167 [2024-11-15 11:10:16.510776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.167 [2024-11-15 11:10:16.510807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:57.167 qpair failed and we were unable to recover it.
00:29:57.167 [... the three messages above repeat roughly 200 more times, from 2024-11-15 11:10:16.511171 through 11:10:16.589817: every reconnect attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), and tqpair=0x7fdfa8000b90 cannot be recovered ...]
00:29:57.173 [2024-11-15 11:10:16.590173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.590202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.590582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.590613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.590969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.590998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.591357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.591387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.591723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.591755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.592116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.592145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.592507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.592536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.592925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.592957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.593319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.593348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.593715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.593746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 
00:29:57.173 [2024-11-15 11:10:16.594160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.594190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.594582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.594613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.173 [2024-11-15 11:10:16.594987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.173 [2024-11-15 11:10:16.595016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.173 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.595383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.595412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.595796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.595826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.596180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.596209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.596646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.596677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.597032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.597062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.597444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.597474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.597833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.597864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 
00:29:57.174 [2024-11-15 11:10:16.598209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.598239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.598597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.598628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.599010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.599039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.599377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.599406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.599875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.599906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.600134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.600166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.600516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.600545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.600903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.600934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.601290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.601319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.601691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.601721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 
00:29:57.174 [2024-11-15 11:10:16.602095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.602124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.602480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.602515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.602902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.602932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.603300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.603331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.603682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.603713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.604090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.604119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.604494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.604523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.604923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.604954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.605318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.605348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.605708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.605739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 
00:29:57.174 [2024-11-15 11:10:16.606102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.606131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.606496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.606525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.606902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.606932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.607286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.607317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.607685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.607716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.608094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.608123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.608503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.608541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.608923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.608955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.609322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.609350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.609711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.609742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 
00:29:57.174 [2024-11-15 11:10:16.610086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.174 [2024-11-15 11:10:16.610117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.174 qpair failed and we were unable to recover it. 00:29:57.174 [2024-11-15 11:10:16.610478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.610507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.610942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.610973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.611345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.611374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.611740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.611770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.611998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.612028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.612408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.612438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.612687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.612720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.613089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.613119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.613485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.613515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 
00:29:57.175 [2024-11-15 11:10:16.613876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.613906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.614265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.614295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.614667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.614699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.615044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.615074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.615475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.615504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.615873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.615903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.616275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.616306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.616663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.616694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.617024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.617054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.617413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.617442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 
00:29:57.175 [2024-11-15 11:10:16.617698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.617728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.618118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.618153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.618543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.618585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.618951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.618984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.619340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.619370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.619717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.619748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.620099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.620129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.620381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.620410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.620767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.620797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.621149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.621180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 
00:29:57.175 [2024-11-15 11:10:16.621541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.621582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.621924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.621954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.622193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.622225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.622605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.622635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.623051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.623417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.623447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.623801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.623832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.624181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.624210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.624619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-11-15 11:10:16.624648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-11-15 11:10:16.624902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.624933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-11-15 11:10:16.625313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.625342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.625595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.625625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.625979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.626008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.626253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.626287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.626639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.626668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.627049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.627078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.627444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.627473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.627823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.627854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.628207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.628239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.628602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.628633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-11-15 11:10:16.628982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.629013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.629387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.629786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.629817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.630183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.630212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.630584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.630616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.630983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.631012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.631358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.631387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.631731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.631763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.632134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.632164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.632531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.632581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-11-15 11:10:16.632957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.632988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.633343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.633379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.633733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.633764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.634126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.634155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.634395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.634423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.634661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.634694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.635069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.635099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.635457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.635486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.635828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.635858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.636228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.636257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-11-15 11:10:16.636616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-11-15 11:10:16.636646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-11-15 11:10:16.637041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.637072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.637425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.637456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.637816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.637847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.638224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.638253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.638609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.638640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.639000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.639029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.639366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.639395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.639775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.639806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-11-15 11:10:16.640171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-15 11:10:16.640200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 
00:29:57.177 [2024-11-15 11:10:16.640558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:57.177 [2024-11-15 11:10:16.640613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 
00:29:57.177 qpair failed and we were unable to recover it. 
[... the same three-line error triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times between 11:10:16.640 and 11:10:16.724, with only the timestamps changing ...]
00:29:57.458 [2024-11-15 11:10:16.724017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:57.458 [2024-11-15 11:10:16.724047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 
00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-11-15 11:10:16.724398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.724426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.724826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.724858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.725145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.725175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.725507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.725537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.725896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.725927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.726317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.726691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.726721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.727087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.727117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.727487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.727516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.727886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.727917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-11-15 11:10:16.728284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.728313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.728660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.728938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-11-15 11:10:16.728971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-11-15 11:10:16.729374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.729403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.729751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.729783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.730147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.730178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.730521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.730550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.730930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.730961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.731308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.731340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.731602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.731633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-11-15 11:10:16.732013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.732044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.732394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.732424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.732826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.732858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.733110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.733139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.733410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.733440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.733701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.733731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.734115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.734145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.734517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.734547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.734923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.734953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.735319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.735350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-11-15 11:10:16.735694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.735724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.736106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.736135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.736508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.736538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.736808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.736839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.737224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.737254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.737629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.737661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.738031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.738060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.738435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.738465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.738731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.738761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.739109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.739140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-11-15 11:10:16.739484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.739515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.739789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.739820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.740222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.740252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.740607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.740638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-11-15 11:10:16.740930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-11-15 11:10:16.740959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.741337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.741367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.741712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.741744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.742118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.742148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.742509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.742539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.742933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.742965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-11-15 11:10:16.743324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.743355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.743729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.743771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.744116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.744146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.744453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.744482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.744848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.744878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.745127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.745156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.745391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.745422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.745793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.746101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.746131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.746540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.746583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-11-15 11:10:16.746937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.746967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.747333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.747363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.747737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.747768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.748130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.748161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.748524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.748554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.748965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.748994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.749333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.749363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.749748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.749779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.750148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.750178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.750530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.750581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-11-15 11:10:16.750962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.750994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.751367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.751396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.751759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.751789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.752157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.752186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.752546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.752587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.752940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.752971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.753311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.753341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.753696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.753728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.753954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.753992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.754398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.754428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-11-15 11:10:16.754783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.754816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.755172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.755202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-11-15 11:10:16.755467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-11-15 11:10:16.755495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.755850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.755881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.756290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.756320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.756684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.756715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.757096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.757126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.757482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.757512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.757883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.757915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.758269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.758299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-11-15 11:10:16.758547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.758591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.759031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.759060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.759428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.759459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.759811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.759843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.760087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.760120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.760347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.760376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.760741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.760773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.761018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.761047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.761283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.761313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.761698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.761730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-11-15 11:10:16.761975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.762004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.762408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.762438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.762881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.762912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.763139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.763168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.763407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.763437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.763702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.763733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.764102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.764132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.764481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.764511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.764872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.764903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.765146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.765176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-11-15 11:10:16.765542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.765583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.765839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.765869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.766302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.766333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.766553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.766608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.766846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.766879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.767236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.767266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.767631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.767661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.767934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.767963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.768216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.768252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-11-15 11:10:16.768643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-11-15 11:10:16.768674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-11-15 11:10:16.769046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.769076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.769447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.769476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.769839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.769869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.770283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.770313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.770644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.770683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.770968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.770997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.771371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.771402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.771747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.771779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.772129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.772159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 00:29:57.462 [2024-11-15 11:10:16.772393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.462 [2024-11-15 11:10:16.772422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.462 qpair failed and we were unable to recover it. 
00:29:57.462 [2024-11-15 11:10:16.772640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.462 [2024-11-15 11:10:16.772670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:57.462 qpair failed and we were unable to recover it.
00:29:57.467 [... the same connect()/qpair-failure triplet repeated 209 more times (timestamps 11:10:16.772918 through 11:10:16.851931), every attempt against tqpair=0x7fdfa8000b90, addr=10.0.0.2, port=4420, errno = 111 ...]
00:29:57.467 [2024-11-15 11:10:16.852295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-11-15 11:10:16.852326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-11-15 11:10:16.852580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-11-15 11:10:16.852611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-11-15 11:10:16.852984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-11-15 11:10:16.853014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.853354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.853384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.853716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.853747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.854053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.854084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.854330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.854363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.854684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.854723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.855083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.855112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.855474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.855503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-11-15 11:10:16.855860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.855892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.856263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.856293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.856654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.856684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.857039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.857071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.857412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.857440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.857794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.857826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.858186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.858215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.858585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.858616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.858976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.859011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.859369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.859400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-11-15 11:10:16.859747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.859778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.860148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.860178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.860541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.860581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.860927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.860957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.861325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.861356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.861726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.861757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.862120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.862150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.862510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.862539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.862930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.862962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.863320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.863349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-11-15 11:10:16.863732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.863764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.864124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.864154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.864522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.864552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.864914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.864943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.865316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.865345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.865692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.865724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-11-15 11:10:16.866054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-11-15 11:10:16.866084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.866453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.866482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.866881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.866912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.867266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.867296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-11-15 11:10:16.867653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.867684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.868030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.868061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.868427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.868458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.868824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.868854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.869020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.869049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.869412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.869441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.869786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.869817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.870214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.870243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.870602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.870633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.870994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.871022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-11-15 11:10:16.871380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.871409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.871771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.871803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.872186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.872215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.872583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.872615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.872955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.872985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.875402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.875470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.875946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.875985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.876352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.876381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.876733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.876772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.877164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.877193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-11-15 11:10:16.877550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.877594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.877948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.877979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.878336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.878365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.878621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.878654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.878919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.878948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.879301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.879330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.879689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.879719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.879975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.880006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.880445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.880475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.880811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.880843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-11-15 11:10:16.881097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.881127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.881574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.881604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.881976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.882005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.882342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.882373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-11-15 11:10:16.882733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-11-15 11:10:16.882763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.883206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.883235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.883632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.883666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.884024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.884055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.884423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.884452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.884814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.884845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-11-15 11:10:16.885213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.885244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.885605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.885635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.886017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.886045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.886379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.886409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.886785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.886815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.887178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.887210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.887447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.887476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.887829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.887859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.888067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.888095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.888472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.888501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-11-15 11:10:16.888874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.888904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.889269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.889298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.889648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.889678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.890011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.890041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.890408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.890437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.890825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.890855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.891220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.891249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.891596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.891627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.891891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.891926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.892276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.892305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-11-15 11:10:16.892599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.892630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.892992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.893021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.893388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.893418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.893797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.893828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.894171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.894202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.894576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.894607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.894946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.894976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.895357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.895386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.895739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.895768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.896111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.896141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-11-15 11:10:16.896501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.896532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-11-15 11:10:16.896927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-11-15 11:10:16.897264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.897294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.897656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.897687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.897944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.897973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.898328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.898358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.898612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.898642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.899048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.899078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.899478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.899507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.899939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.899969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 
00:29:57.471 [2024-11-15 11:10:16.900337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.900367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.900731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.900761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.901129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.901158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.901611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.901643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.901998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.902027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.902385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.902415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.902800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.902839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.903207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.903239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.903638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.903670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-11-15 11:10:16.904042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.904071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 
00:29:57.471 [2024-11-15 11:10:16.904426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-11-15 11:10:16.904455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it.
[console output condensed: the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-15 11:10:16.904 through 11:10:16.984, log timestamps 00:29:57.471 through 00:29:57.752; every reconnect attempt to 10.0.0.2:4420 was refused and no qpair recovered]
00:29:57.753 [2024-11-15 11:10:16.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.984505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.984880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.984911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.985254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.985284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.985673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.985704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.986066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.986097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.986439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-15 11:10:16.986468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-15 11:10:16.986886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.986916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.987146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.987177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.987544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.987585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.987953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.987983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-11-15 11:10:16.988360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.988388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.988745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.988775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.989163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.989193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.989636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.989667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.989995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.990034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.990390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.990419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.990746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.990775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.991123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.991152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.991522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.991550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.991903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.991933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-11-15 11:10:16.992297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.992326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.992676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.992707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.993077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.993106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.993458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.993487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.993837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.993868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.994128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.994167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.994583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.994613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.994979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.995009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.995374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.995403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.995816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.995846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-11-15 11:10:16.996204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.996233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.996592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.996623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.996996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.997025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.997321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.997349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.997750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.997781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.998033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.998062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.998244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.998273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.998636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.998666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.999058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.999086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:16.999522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.999553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-11-15 11:10:16.999942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:16.999972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-15 11:10:17.000339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-15 11:10:17.000368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.000715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.000747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.001104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.001133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.001508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.001536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.001923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.001954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.002287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.002318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.002662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.002694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.002927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.002957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.003296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.003326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-11-15 11:10:17.003738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.003768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.004143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.004173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.004529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.004558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.004939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.004968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.005224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.005254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.005632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.005663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.006022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.006053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.006427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.006456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.006809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.006841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.007196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.007226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-11-15 11:10:17.007593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.007623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.007984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.008014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.008348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.008377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.008739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.008769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.009014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.009046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.009282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.009319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.009656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.009686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.010061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.010091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.010457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.010485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.010854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.010886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-11-15 11:10:17.011237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.011267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.011642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.011673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.012032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.012063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.012432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.012462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.012820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.012851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.013227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.013258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.013617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.013647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.014012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-11-15 11:10:17.014043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-11-15 11:10:17.014430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.014459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.014814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.014846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-11-15 11:10:17.015215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.015245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.015690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.015721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.016102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.016131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.016541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.016597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.016946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.016975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.017121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.017152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.017529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.017558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.017938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.018311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.018339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.018705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.018737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-11-15 11:10:17.018959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.018988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.019345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.019374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.019729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.019760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.020132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.020163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.020533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.020572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.020943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.020973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.021369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.021399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.021794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.021826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.022061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.022093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.022441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.022470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-11-15 11:10:17.022868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.022899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.023243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.023272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.023524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.023554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.023959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.023990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.024239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.024269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.024519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.024560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.024918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.024948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.025294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.025324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.025593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.025622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.025971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.026000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-11-15 11:10:17.026257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.026288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.026670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.026701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.027069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.027098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.027465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.027494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.027909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.027939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.028303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-11-15 11:10:17.028332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-11-15 11:10:17.028694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.028725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.029013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.029043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.029428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.029457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.029803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.029835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-11-15 11:10:17.030205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.030235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.030597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.030629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.030994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.031023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.031383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.031412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.031769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.031800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.032217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.032246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.032619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.032650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.033099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.033129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.033552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.033591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.033912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.033942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-11-15 11:10:17.034311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.034341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.034724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.034755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.035130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.035159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.035517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.035546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.035897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.035927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.036293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.036322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.036668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.036699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.036953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.036982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.037271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.037300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.037733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.037764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-11-15 11:10:17.038123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.038152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.038500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.038529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.038912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.038943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.039277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.039307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.039547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.039588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.039836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-11-15 11:10:17.039870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-11-15 11:10:17.040227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.040257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.040613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.040645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.041023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.041053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.041286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.041317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-11-15 11:10:17.041679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.041710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.042076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.042105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.042473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.042504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.042873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.042904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.043261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.043291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.043645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.043676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.044049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.044080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.044415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.044445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.044831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.044862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.045228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.045257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-11-15 11:10:17.045600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.045630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.046004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.046035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.046447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.046477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.046812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.046843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.047182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.047212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.047605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.047640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.047877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.047909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.048274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.048307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.048695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.048727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.049107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-11-15 11:10:17.049335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.049366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.049731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.049762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.050130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.050159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.050518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.050548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.050917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.050947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.051310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.051339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.051616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.051651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.052000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.052031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.052395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.052425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.052794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.052827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-11-15 11:10:17.053223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.053251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.053595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.053626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-11-15 11:10:17.053980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-11-15 11:10:17.054009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.054385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.054414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.054789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.054819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.055183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.055219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.055582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.055614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.055996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.056025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.056393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.056423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.056792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.056824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 
00:29:57.759 [2024-11-15 11:10:17.057183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.057211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.057582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.057613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.057979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.058008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.058368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.058397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.058732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.058764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.059121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.059151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.059402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.059431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.059805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.059836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.060148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.060178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.060588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.060619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 
00:29:57.759 [2024-11-15 11:10:17.060984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.061013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.061387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.061417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.061819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.061850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.062219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.062248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.062595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.062626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.062969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.062999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.063369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.063398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.063743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.063775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.064152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.064182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.064535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.064582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 
00:29:57.759 [2024-11-15 11:10:17.064949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.064988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.065354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.065383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.065717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.065750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.066108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.066137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.066499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.066528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.066916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.066947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.067186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.067219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.067391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-11-15 11:10:17.067421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-11-15 11:10:17.067761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.067792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.068161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.068191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-11-15 11:10:17.068561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.068607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.068953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.068985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.069345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.069374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.069710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.069743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.070097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.070127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.070482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.070519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.070918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.070948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.071307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.071337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.071702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.071733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.072089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.072118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-11-15 11:10:17.072377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.072406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.072756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.072787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.073130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.073160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.073519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.073548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.073920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.073950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.074313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.074343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.074708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.074739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.075105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.075135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.075497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.075528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.075904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.075935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-11-15 11:10:17.076295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.076326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.076693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.076724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.077103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.077133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.077496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.077524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.077921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.077952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.078310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.078339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.078717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.078748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.079146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.079176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.079529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.079559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.079920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.079951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-11-15 11:10:17.080296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.080326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.080691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.080722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.081099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.081129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.081483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.081513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.081923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.081954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.082298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-11-15 11:10:17.082329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-11-15 11:10:17.082752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.082784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.083124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.083155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.083487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.083516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.083763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.083798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-11-15 11:10:17.084159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.084189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.084555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.084597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.084920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.084951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.085321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.085351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.085735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.085765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.086150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.086188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.086536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.086579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.086913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.086943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.087305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.087335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.087697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.087728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-11-15 11:10:17.088079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.088110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.088358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.088388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.088744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.088776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.089159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.089189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.089544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.089583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.090000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.090030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.090394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.090423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.090812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.090843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.091208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.091238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.091602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.091634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-11-15 11:10:17.092027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.092057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.092309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.092342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.092724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.092756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.092979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.093012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.093383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.093414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.093790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.093821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.094200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.094230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.094618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.094648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.095021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.095051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.095415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.095444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-11-15 11:10:17.095811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.095843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.096128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.096158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.096501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.096532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.096902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-11-15 11:10:17.096933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-11-15 11:10:17.097294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.097323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.097767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.097798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.098167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.098196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.098546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.098585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.098851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.098879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.099226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.099255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-11-15 11:10:17.099624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.099655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.099976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.100004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.100354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.100383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.100746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.100777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.101143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.101172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.101533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.101587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.102019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.102048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.102416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.102445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.102823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.102855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-11-15 11:10:17.103182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-11-15 11:10:17.103211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [... last three messages repeated: the posix_sock_create connect() failure (errno = 111), the nvme_tcp_qpair_connect_sock error for tqpair=0x7fdfa8000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." recur continuously with timestamps advancing from 11:10:17.103586 through 11:10:17.175692 ...]
00:29:57.767 [2024-11-15 11:10:17.176056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.176085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.176455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.176484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.176872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.176902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.177241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.177271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.177632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.177663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.178017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.178046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.178437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.178466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.178815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.178847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.179107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.179136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.179504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.179532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 
00:29:57.767 [2024-11-15 11:10:17.179936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.179966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.180333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.180362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-11-15 11:10:17.180722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-11-15 11:10:17.180751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.181165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.181194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.181522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.181551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.181942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.181972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.182338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.182368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.182730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.182763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.183114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.183144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.183539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.183579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-11-15 11:10:17.183978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.184007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.184378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.184406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.184743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.184773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.185139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.185168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.185531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.185560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.185941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.185972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.186342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.186371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.186729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.186760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.187178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.187207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.187572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.187601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-11-15 11:10:17.187869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.187898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.188254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.188284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.188645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.188681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.189049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.189078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.189449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.189479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.189816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.189846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.190092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.190124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.190505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.190535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.190915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.190947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.191302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.191332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-11-15 11:10:17.191698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.191730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.192101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.192130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.192501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.192531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.192797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.192827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.193214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.193243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.193614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.193644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.193991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.194021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.194359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.194390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.194736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.194766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-11-15 11:10:17.195125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.195154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-11-15 11:10:17.195522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-11-15 11:10:17.195551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.195921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.195951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.196317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.196348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.196603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.196636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.196996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.197026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.197392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.197421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.197793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.197823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.198179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.198208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.198583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.198614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.198845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.198877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-11-15 11:10:17.199227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.199259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.199630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.199660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.200022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.200051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.200411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.200440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.200808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.200838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.201135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.201163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.201499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.201528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.201994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.202024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.202378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.202407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.202775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.202804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-11-15 11:10:17.203159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.203188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.203551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.203590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.203946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.203982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.204343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.204372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.204724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.204756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.205122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.205150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.205503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.205533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.205913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.205944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.206375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.206404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.206748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.206779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-11-15 11:10:17.207143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.207172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.207534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.207572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.207946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.207975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.208336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.208365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.208721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.208753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.209117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.209146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.209510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.209539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.209912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.209942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-11-15 11:10:17.210311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-11-15 11:10:17.210340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.210708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.210738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-11-15 11:10:17.211101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.211130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.211487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.211516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.211887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.211917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.212182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.212211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.212561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.212603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.212949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.212979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.213315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.213344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.213601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.213635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.214003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.214034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.214389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.214419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-11-15 11:10:17.214787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.214817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.215187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.215217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.215591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.215621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.215883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.215912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.216265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.216294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.216654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.216684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.217096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.217125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.217477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.217506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.217877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.217908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.218350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.218379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-11-15 11:10:17.218641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.218672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.219018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.219047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.219421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.219456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.219829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.219859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.220233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.220263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.220627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.220658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.220926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.220956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.221319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.221348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.221714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.221747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.222010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.222039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-11-15 11:10:17.222395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.222424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.222785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.222816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.223227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.223257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.223657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.223687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.223939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.223968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.224325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-11-15 11:10:17.224651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-11-15 11:10:17.224681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.225046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.225075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.225435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.225741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.225771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 
00:29:57.771 [2024-11-15 11:10:17.226150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.226179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.226553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.226598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.226942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.226971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.227334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.227364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.227626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.227658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.228108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.228137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.228381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.228410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.228771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.228801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.229168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.229197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-11-15 11:10:17.229576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-11-15 11:10:17.229607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 
00:29:57.771 [2024-11-15 11:10:17.230008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.771 [2024-11-15 11:10:17.230039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:57.771 qpair failed and we were unable to recover it.
[... the same three-line record repeats roughly 200 more times with only the timestamps advancing (11:10:17.230397 through 11:10:17.309703; Jenkins wall clock moves from 00:29:57.771 to 00:29:58.049 during the run); every retry against tqpair=0x7fdfa8000b90, addr=10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:29:58.049 [2024-11-15 11:10:17.310060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.049 [2024-11-15 11:10:17.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.049 qpair failed and we were unable to recover it.
00:29:58.049 [2024-11-15 11:10:17.310423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.310451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.310785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.310815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.311146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.311181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.311526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.311555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.311916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.312303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.312332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.312710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.312741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.313101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.313531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.313561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.313773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.313803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 
00:29:58.049 [2024-11-15 11:10:17.314194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.049 [2024-11-15 11:10:17.314222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.049 qpair failed and we were unable to recover it. 00:29:58.049 [2024-11-15 11:10:17.314589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.314621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.314980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.315010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.315412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.315440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.315700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.315730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.316101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.316131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.316490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.316519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.316947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.316978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.317413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.317443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.317823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.317854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 
00:29:58.050 [2024-11-15 11:10:17.318194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.318223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.318585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.318616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.318969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.319338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.319367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.319730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.319759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.320024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.320053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.320408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.320437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.320785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.320815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.321183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.321214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.321603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.321633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 
00:29:58.050 [2024-11-15 11:10:17.322017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.322045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.322461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.322490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.322735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.322769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.323146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.323176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.323465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.323493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.323863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.323893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.324264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.324293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.324651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.324682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.325054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.325083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.325448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.325477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 
00:29:58.050 [2024-11-15 11:10:17.325833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.325863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.326219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.326248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.326545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.326608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.327006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.327035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.327398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.327426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.327790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.327821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.328181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.328210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.328588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.328618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.328876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-11-15 11:10:17.328907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-11-15 11:10:17.329283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.329312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-11-15 11:10:17.329549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.329587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.329960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.329990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.330348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.330378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.330738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.330769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.331131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.331160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.331520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.331548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.331929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.331959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.332327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.332357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.332714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.332745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.333109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.333138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-11-15 11:10:17.333399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.333428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.333835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.333866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.334229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.334257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.334622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.334660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.335023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.335052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.335420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.335448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.335807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.335838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.336202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.336232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.336596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.336627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.337015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.337044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-11-15 11:10:17.337286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.337315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.337677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.337709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.338093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.338122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.338479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.338508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.338901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.338931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.339292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.339322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.339691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.339721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.340092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.340121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.340468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.340496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.340856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.340887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-11-15 11:10:17.341235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-11-15 11:10:17.341266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-11-15 11:10:17.341641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.341671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.342051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.342080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.342459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.342489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.342822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.342853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.343103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.343131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.343549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.343589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.343945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.343973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.344333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.344362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.344720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.344752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-11-15 11:10:17.345113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.345142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.345504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.345533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.345890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.345921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.346269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.346297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.346662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.346693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.347037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.347066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.347424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.347454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.347794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.347824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.348125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.348155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.348398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.348427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-11-15 11:10:17.348789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.348820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.349189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.349218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.349581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.349612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.349991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.350021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.350315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.350344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.350687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.350718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.351085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.351115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.351476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.351504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.351884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.351914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.352275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.352310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-11-15 11:10:17.352673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.352704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.353058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.353089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.353443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.353472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.353715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.353747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.354148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.354178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.354550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.354588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.354997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.355026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.355382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.355411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.355792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-11-15 11:10:17.355822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-11-15 11:10:17.356190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.356219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-11-15 11:10:17.356585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.356616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.356969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.356997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.357359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.357388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.357756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.357787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.358166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.358195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.358553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.358604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.358937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.358968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.359320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.359350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.359712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.359742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-11-15 11:10:17.360097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.360127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-11-15 11:10:17.360488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-11-15 11:10:17.360517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it.
[... the same connect()/qpair error triplet repeats continuously for tqpair=0x7fdfa8000b90 (addr=10.0.0.2, port=4420) from 2024-11-15 11:10:17.360488 through 11:10:17.440090; duplicate log lines elided ...]
00:29:58.058 [2024-11-15 11:10:17.440336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-11-15 11:10:17.440364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-11-15 11:10:17.440733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-11-15 11:10:17.440763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-11-15 11:10:17.441101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-11-15 11:10:17.441130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-11-15 11:10:17.441499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.441529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.441864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.442251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.442281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.442530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.442560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.442925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.442955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.443316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.443346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.443703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.443741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 
00:29:58.059 [2024-11-15 11:10:17.444107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.444137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.444506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.444535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.444876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.444906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.445274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.445303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.445720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.445750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.446101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.446132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.446502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.446531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.446915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.446946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.447306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.447335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.447737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.447767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 
00:29:58.059 [2024-11-15 11:10:17.448121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.448151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.448508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.448539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.448735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.448765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.449136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.449166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.449525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.449554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.449924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.449954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.450206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.450236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.450592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.450623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.451022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.451052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.451387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.451416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 
00:29:58.059 [2024-11-15 11:10:17.451777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.451808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.452294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.452657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.452688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.453032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.453060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.453420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.453449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.453696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.453730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.454106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.454137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 582118 Killed "${NVMF_APP[@]}" "$@" 00:29:58.059 [2024-11-15 11:10:17.454499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.454528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 [2024-11-15 11:10:17.454900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.454931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it.
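The `Killed` line above records the harness delivering SIGKILL to the target application ("${NVMF_APP[@]}", PID 582118). From that point nothing listens on 10.0.0.2:4420, so the kernel refuses each TCP connection attempt and every connect() in the host's reconnect loop fails with errno = 111, which is ECONNREFUSED on Linux. A minimal standalone C sketch of that failure mode (not part of the test suite; the address and port are taken from the log, and any port with no listener behaves the same):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        /* 10.0.0.2:4420 is the listener the log above is trying to reach. */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* With no listener on the port this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        close(fd);
        return 0;
    }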
00:29:58.059 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:58.059 [2024-11-15 11:10:17.455173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.059 [2024-11-15 11:10:17.455203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.059 qpair failed and we were unable to recover it. 00:29:58.059 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:58.060 [2024-11-15 11:10:17.455542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.455585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.060 [2024-11-15 11:10:17.455966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.455996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.060 [2024-11-15 11:10:17.456246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.456275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.456673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.456704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.457061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.457091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.457452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.457481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.457851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.457881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 
00:29:58.060 [2024-11-15 11:10:17.458248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.458276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.458617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.458648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.459025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.459054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.459415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.459444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.459711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.459741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.460111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.460141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.460498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.460527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.460905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.460936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.461302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.461332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.461629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.461659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 
00:29:58.060 [2024-11-15 11:10:17.461934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.461963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.462213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.462242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.462615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.462645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.463029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.463060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.463397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.463426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.463796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.463827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 [2024-11-15 11:10:17.464174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.464203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=583152 00:29:58.060 [2024-11-15 11:10:17.464603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.464635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 583152 00:29:58.060 [2024-11-15 11:10:17.465006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.465036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 
00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 583152 ']' 00:29:58.060 [2024-11-15 11:10:17.465399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.465430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.060 [2024-11-15 11:10:17.465702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.465734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:58.060 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.060 [2024-11-15 11:10:17.466136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-11-15 11:10:17.466168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.060 qpair failed and we were unable to recover it. 00:29:58.061 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:58.061 [2024-11-15 11:10:17.466411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 11:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.061 [2024-11-15 11:10:17.466443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.466809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.466840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.467203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.467234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it.
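Here the test restarts the target: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0, and waitforlisten 583152 then polls until the new process (PID 583152) is up, giving up after the max_retries=100 budget seen in the xtrace. A hypothetical C sketch of that poll-until-listening pattern; per the log, the real waitforlisten shell helper waits on the UNIX-domain RPC socket /var/tmp/spdk.sock, and this sketch merely transposes the same idea to the TCP endpoint the host is reconnecting to:

    #include <arpa/inet.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Poll ip:port once per second until a listener accepts the
     * connection or the retry budget is exhausted (cf. max_retries=100). */
    static bool wait_for_listener(const char *ip, uint16_t port, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = { .sin_family = AF_INET,
                                        .sin_port = htons(port) };
            inet_pton(AF_INET, ip, &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return true;   /* target is up and listening */
            }
            close(fd);         /* ECONNREFUSED: target not up yet */
            sleep(1);
        }
        return false;          /* budget exhausted, report failure */
    }

    int main(void)
    {
        /* Endpoint taken from the log above. */
        printf("target %s\n",
               wait_for_listener("10.0.0.2", 4420, 100) ? "listening" : "still down");
        return 0;
    }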
00:29:58.061 [2024-11-15 11:10:17.467599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.467631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.467999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.468031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.468391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.468421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.468821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.468853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.469223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.469257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.469634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.469666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.469921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.469953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.470328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.470360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.470739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.470771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.471109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.471146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 
00:29:58.061 [2024-11-15 11:10:17.471435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.471466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.471717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.471751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.472130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.472160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.472525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.472556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.472904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.472935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.473273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.473304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.473515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.473548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.473816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.473847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.474201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.474231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.474681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.474714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 
00:29:58.061 [2024-11-15 11:10:17.475090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.475121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.475365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.475396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.475662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.475694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.476107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.476144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.476497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.476528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.476925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.476957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.477317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.477347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.477644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.477676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.478062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.478092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.478326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.478356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 
00:29:58.061 [2024-11-15 11:10:17.478736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.478768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.479145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.479174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.479536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.479575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.479943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-15 11:10:17.479973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-15 11:10:17.480348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.480377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.480739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.480769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.481034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.481063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.481449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.481479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.481846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.481875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.482239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.482268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 
00:29:58.062 [2024-11-15 11:10:17.482534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.482576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.482987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.483018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.483390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.483418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.483795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.483825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.484178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.484208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.484493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.484523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.484885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.484916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.485180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.485210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.485444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.485474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-15 11:10:17.485872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-15 11:10:17.485905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 
00:29:58.062 [2024-11-15 11:10:17.486275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.062 [2024-11-15 11:10:17.486307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.062 qpair failed and we were unable to recover it.
[... this three-line connect()/qpair-failure record repeats verbatim, with only the timestamps advancing, from 11:10:17.486 through 11:10:17.523 ...]
00:29:58.065 [2024-11-15 11:10:17.523381] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization...
00:29:58.065 [2024-11-15 11:10:17.523445] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
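The EAL line above records how the nvmf target's DPDK environment is brought up: "-c 0xF0" is a hexadecimal core mask, "--file-prefix=spdk0" keeps this process's hugepage files separate from other DPDK processes, "--proc-type=auto" lets EAL pick the primary/secondary process role, and "--base-virtaddr=0x200000000000" requests a fixed virtual base for memory mappings. As a worked example of the core mask only, here is a minimal standalone C sketch (illustrative only, not DPDK code; the file name and output wording are made up) that decodes 0xF0 into the enabled cores:

    /* coremask_demo.c -- illustrative only, not DPDK code. */
    #include <stdio.h>

    int main(void) {
        unsigned long long mask = 0xF0ULL;        /* from "-c 0xF0" above */
        for (int core = 0; core < 64; core++)
            if (mask & (1ULL << core))
                printf("core %d enabled\n", core); /* prints cores 4, 5, 6, 7 */
        return 0;
    }

0xF0 is binary 11110000, so the target is pinned to logical cores 4-7.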
[... the same three-line connect()/qpair-failure record resumes as initialization proceeds and keeps repeating, with only the timestamps advancing, from 11:10:17.523 through 11:10:17.563; its last occurrence follows ...]
00:29:58.344 [2024-11-15 11:10:17.563598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.344 [2024-11-15 11:10:17.563629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.344 qpair failed and we were unable to recover it.
00:29:58.344 [2024-11-15 11:10:17.564014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.564044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.564424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.564498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.564969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.565000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.565403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.565432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.565885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.565924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.566263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.566293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.566660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.566691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.567070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.567099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.567468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.567497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.567860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.567891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 
00:29:58.344 [2024-11-15 11:10:17.568233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.568264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.568614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.568645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.568946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.568974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.569416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.569445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.569829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.569859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.570209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.570239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.570608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.570639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.570995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.571025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.571398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.571427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.571788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.571819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 
00:29:58.344 [2024-11-15 11:10:17.572174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.572203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.572411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.572440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.572776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.572806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.573152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.573181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.573547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.573587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.573918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.344 [2024-11-15 11:10:17.573947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.344 qpair failed and we were unable to recover it. 00:29:58.344 [2024-11-15 11:10:17.574326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.574355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.574704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.574736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.575167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.575196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.575559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.575601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 
00:29:58.345 [2024-11-15 11:10:17.575969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.575999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.576234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.576263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.576637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.576667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.577025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.577055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.577412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.577441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.577831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.577861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.578222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.578252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.578623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.578654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.579033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.579062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.579281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.579311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 
00:29:58.345 [2024-11-15 11:10:17.579571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.579600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.579985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.580015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.580374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.580407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.580763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.580792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.581134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.581170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.581443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.581477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.581734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.581763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.582160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.582191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.582627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.582658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.583020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.583049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 
00:29:58.345 [2024-11-15 11:10:17.583419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.583449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.583822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.583852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.584219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.584248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.584512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.584541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.584798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.584828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.585189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.585218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.585572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.585602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.586044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.586074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.586433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.586463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.586837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.586869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 
00:29:58.345 [2024-11-15 11:10:17.587130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.587158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.587494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.587523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.587951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.587982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.345 [2024-11-15 11:10:17.588343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.345 [2024-11-15 11:10:17.588372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.345 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.588753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.588783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.589150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.589180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.589622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.589653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.590028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.590060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.590421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.590450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.590792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.590823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 
00:29:58.346 [2024-11-15 11:10:17.591193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.591222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.591582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.591613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.591974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.592004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.592249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.592278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.592656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.592688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.593060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.593090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.593507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.593537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.593902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.593933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.594352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.594720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.594750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 
00:29:58.346 [2024-11-15 11:10:17.595117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.595147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.595501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.595531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.595764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.595793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.596155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.596184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.596547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.596600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.597008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.597038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.597401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.597431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.597789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.597820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.598175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.598204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.598463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.598491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 
00:29:58.346 [2024-11-15 11:10:17.598775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.598807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.599180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.599208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.599589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.599619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.599962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.599994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.600367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.600398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.600824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.600855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.601084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.601113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.601473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.601502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.601873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.601904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.602256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.602287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 
00:29:58.346 [2024-11-15 11:10:17.602647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.602679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.602974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.346 [2024-11-15 11:10:17.603004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.346 qpair failed and we were unable to recover it. 00:29:58.346 [2024-11-15 11:10:17.603345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.603375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.603741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.603772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.604039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.604067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.604397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.604428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.604827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.604857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.605218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.605248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.605647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.605918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.605947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 
00:29:58.347 [2024-11-15 11:10:17.606393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.606423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.606797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.606830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.607184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.607215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.607583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.607614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.607956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.607986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.608357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.608386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.608747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.608779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.609150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.609181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.609547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.609587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.609949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.609978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 
00:29:58.347 [2024-11-15 11:10:17.610324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.610354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.610734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.610765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.611129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.611158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.611570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.611602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.611962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.611997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.612346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.612375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.612747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.612777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.613148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.613177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.613549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.613591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 00:29:58.347 [2024-11-15 11:10:17.613953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.347 [2024-11-15 11:10:17.613983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.347 qpair failed and we were unable to recover it. 
00:29:58.347 [2024-11-15 11:10:17.614360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.347 [2024-11-15 11:10:17.614389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.347 qpair failed and we were unable to recover it.
00:29:58.349 [2024-11-15 11:10:17.630744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:58.353 [2024-11-15 11:10:17.682341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.353 [2024-11-15 11:10:17.682382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.353 [2024-11-15 11:10:17.682390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.353 [2024-11-15 11:10:17.682398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.353 [2024-11-15 11:10:17.682405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.353 [2024-11-15 11:10:17.684511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:58.353 [2024-11-15 11:10:17.684615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:58.353 [2024-11-15 11:10:17.684818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:58.353 [2024-11-15 11:10:17.684820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:58.359 [2024-11-15 11:10:17.753728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.753758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.754135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.754164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.754549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.754595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.754981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.755012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.755255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.755283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.755650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.359 [2024-11-15 11:10:17.755680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.359 qpair failed and we were unable to recover it. 00:29:58.359 [2024-11-15 11:10:17.756047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.756076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.756435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.756463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.756825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.756856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.757207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.757238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-11-15 11:10:17.757607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.757643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.758017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.758046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.758275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.758304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.758663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.758693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.759048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.759077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.759438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.759467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.759690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.759720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-11-15 11:10:17.759848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-11-15 11:10:17.759876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
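For context: errno 111 on Linux is ECONNREFUSED, meaning the TCP SYN to 10.0.0.2:4420 was answered with a RST because nothing is listening on the NVMe/TCP port yet; SPDK's posix socket layer therefore fails the connect() and nvme_tcp gives up on the qpair. A minimal standalone sketch of how that errno is produced (plain POSIX sockets, not the actual posix.c code path; it assumes the address is reachable but has no listener, since an unreachable address would time out with a different errno instead):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* IANA-assigned NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener bound to the port, the kernel answers the SYN with a
     * RST and connect() fails with ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}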
00:29:58.360 Read completed with error (sct=0, sc=8)
00:29:58.360 starting I/O failed
00:29:58.360 [... 32 outstanding I/Os in total (24 reads, 8 writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:58.360 [2024-11-15 11:10:17.760704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.360 [2024-11-15 11:10:17.761064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.360 [2024-11-15 11:10:17.761133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420
00:29:58.360 qpair failed and we were unable to recover it.
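Reading the status pair: sct=0 selects the NVMe Generic Command Status type, and within it sc=0x08 is "Command Aborted due to SQ Deletion", which is consistent with the queue pair being torn down after the CQ transport error (-6 is ENXIO, "No such device or address"). A small self-contained decoder for the pair; the names follow the NVMe specification rather than any particular library's headers:

#include <stdio.h>

/* Status fields carried in an NVMe completion entry; only the two fields
 * the log prints are modeled here. */
struct cpl_status {
    unsigned sct;   /* Status Code Type (0 = Generic Command Status) */
    unsigned sc;    /* Status Code within that type                  */
};

static const char *describe(struct cpl_status st)
{
    if (st.sct != 0) {
        return "non-generic status (command-specific, media error, path, ...)";
    }
    switch (st.sc) {            /* Generic Command Status codes, NVMe spec */
    case 0x00: return "successful completion";
    case 0x04: return "data transfer error";
    case 0x07: return "command abort requested";
    case 0x08: return "command aborted due to SQ deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    struct cpl_status st = { .sct = 0, .sc = 8 };   /* the pair in the log */
    printf("sct=%u, sc=%u -> %s\n", st.sct, st.sc, describe(st));
    return 0;
}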
00:29:58.360 [... the reconnect attempts continue for tqpair=0x7fdfb0000b90 (addr=10.0.0.2, port=4420) through 11:10:17.810880: every connect() fails with errno = 111 and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:58.365 [2024-11-15 11:10:17.811238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.811268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.811625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.811654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.812024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.812056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.812418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.812447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.812800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.812830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.813188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.813217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.813450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.813479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.813847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.813876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.814115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.814143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.814464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.814493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 
00:29:58.365 [2024-11-15 11:10:17.814849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.814885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.815263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.815292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.815720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.815750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.816097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.816126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.816490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.816519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.816781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.816810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.817192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.817221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.817581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.817611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.817969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.817998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.818330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.818359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 
00:29:58.365 [2024-11-15 11:10:17.818723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.818752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.819127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.819156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.819516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.819545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.819919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.819949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.820294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.820324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.820590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.820620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.820840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.820870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.821074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.821102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.365 [2024-11-15 11:10:17.821474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.365 [2024-11-15 11:10:17.821503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.365 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.821717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.821746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-11-15 11:10:17.822100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.822129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.822481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.822509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.822695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.822724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.823125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.823154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.823514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.823543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.823976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.824006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.824356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.824386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.824646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.824677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.825028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.825057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.825416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.825446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-11-15 11:10:17.825820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.825850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.826218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.826245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.826586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.826617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.826865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.826897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.827238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.827267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.827634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.827663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.827989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.828017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.828365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.828394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.828759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.828788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.829130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.829159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-11-15 11:10:17.829254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.829288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.829613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.829643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.829916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.829946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.830290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.830319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.830685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.830714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.830949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.830981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.831335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.831364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.831737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.831767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.832132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.832160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.832534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.832572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-11-15 11:10:17.832900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.832930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.833285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.833314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.833730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-11-15 11:10:17.833760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-11-15 11:10:17.834112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.834141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.834371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.834399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.834728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.834757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.835108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.835137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.835496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.835525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.835883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.835913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.836263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.836293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-11-15 11:10:17.836549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.836587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.836936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.836965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.837199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.837228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.837585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.837614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.837966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.837995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.838354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.838383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.838732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.838761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.839132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.839162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.839516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.839546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.839766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.839795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-11-15 11:10:17.840148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.840176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.840533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.840561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.840914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.840943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.841310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.841338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.841549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.841607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.841979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.842009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.842379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.842408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.842785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.842816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.843043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.843076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.843437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.843466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-11-15 11:10:17.843677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.843714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.844155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.844185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.844510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.844541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.844909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.844939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.845148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-11-15 11:10:17.845176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-11-15 11:10:17.845405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.845434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.845776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.845806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.846046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.846075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.846456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.846485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.846835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.846865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 
00:29:58.368 [2024-11-15 11:10:17.847219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.847248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.847602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.847631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.847849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.847877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.848168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.848197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.848571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.848602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.848819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.848847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.849220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.849249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.849648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.849677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.849772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.849799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.850036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.850067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 
00:29:58.368 [2024-11-15 11:10:17.850329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.850358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.850700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.850729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.850960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.850989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.851329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.851358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.851727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.851756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.852145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.852174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.852601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.852630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.852959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.852989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.853351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.853380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.853724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.853754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 
00:29:58.368 [2024-11-15 11:10:17.854114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.854142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.854364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.854392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.854557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.854595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.854937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.854965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.855332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.855361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.855602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.855634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.368 [2024-11-15 11:10:17.855870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.368 [2024-11-15 11:10:17.855899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.368 qpair failed and we were unable to recover it. 00:29:58.369 [2024-11-15 11:10:17.856223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.369 [2024-11-15 11:10:17.856251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.369 qpair failed and we were unable to recover it. 00:29:58.369 [2024-11-15 11:10:17.856626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.856659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.856881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.856912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 
00:29:58.646 [2024-11-15 11:10:17.857287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.857323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.857685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.857716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.858086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.858114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.858477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.858506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.858870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.858900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.859242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.646 [2024-11-15 11:10:17.859272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.646 qpair failed and we were unable to recover it. 00:29:58.646 [2024-11-15 11:10:17.859617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.647 [2024-11-15 11:10:17.859650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.647 qpair failed and we were unable to recover it. 00:29:58.647 [2024-11-15 11:10:17.859968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.647 [2024-11-15 11:10:17.859996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.647 qpair failed and we were unable to recover it. 00:29:58.647 [2024-11-15 11:10:17.860358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.647 [2024-11-15 11:10:17.860386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.647 qpair failed and we were unable to recover it. 00:29:58.647 [2024-11-15 11:10:17.860648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.647 [2024-11-15 11:10:17.860681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.647 qpair failed and we were unable to recover it. 
00:29:58.652 [2024-11-15 11:10:17.930364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.930392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.930782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.930812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.931169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.931197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.931560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.931604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.931942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.931970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.932316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.932346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.932709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.932740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.933087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.933114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.933470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.933499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.652 [2024-11-15 11:10:17.933708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.933738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 
00:29:58.652 [2024-11-15 11:10:17.934084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.652 [2024-11-15 11:10:17.934112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.652 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.934375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.934404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.934510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.934538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.934893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.934922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.935278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.935308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.935665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.935695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.936016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.936046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.936399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.936433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.936780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.936811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.937166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.937195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 
00:29:58.653 [2024-11-15 11:10:17.937554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.937590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.937819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.937847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.938075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.938104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.938449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.938476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.938891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.938921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.939141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.939169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.939550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.939822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.939850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.940192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.940221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.940452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.940484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 
00:29:58.653 [2024-11-15 11:10:17.940860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.940890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.941238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.941267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.941619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.941649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.941998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.942027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.942303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.942330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.942684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.942713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.942979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.943008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.943347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.943376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.943737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.943769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.944129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.944158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 
00:29:58.653 [2024-11-15 11:10:17.944512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.944540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.944900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.944931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.945277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.945307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.945652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.945682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.945891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.945920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.946239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.946268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.946630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.946658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.946998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.947027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.653 qpair failed and we were unable to recover it. 00:29:58.653 [2024-11-15 11:10:17.947378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.653 [2024-11-15 11:10:17.947407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.947620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.947649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 
00:29:58.654 [2024-11-15 11:10:17.948020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.948050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.948383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.948412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.948654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.948686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.949047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.949076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.949278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.949307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.949682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.949713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.950082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.950110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.950490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.950525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.950978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.951007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.951359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.951388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 
00:29:58.654 [2024-11-15 11:10:17.951737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.951766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.952013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.952045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.952401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.952431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.952801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.952831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.953037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.953397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.953426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.953770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.953800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.953959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.953988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.954204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.954232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.954580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.954610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 
00:29:58.654 [2024-11-15 11:10:17.954972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.955001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.955353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.955382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.955775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.955806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.956154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.956183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.956554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.956593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.956807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.956840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.957208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.957237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.957450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.957478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.957824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.957855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.958289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.958319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 
00:29:58.654 [2024-11-15 11:10:17.958528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.958556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.958767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.958797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.654 [2024-11-15 11:10:17.959007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.654 [2024-11-15 11:10:17.959036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.654 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.959283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.959311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.959749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.959780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.960111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.960141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.960502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.960531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.960748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.960777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.961183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.961212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.961577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.961607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 
00:29:58.655 [2024-11-15 11:10:17.961913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.961942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.962181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.962210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.962559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.962598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.962821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.962850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.963077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.963109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.963469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.963499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.963883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.963913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.964117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.964151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.964366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.964394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.964631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.964661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 
00:29:58.655 [2024-11-15 11:10:17.964791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.964820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.965199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.965228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.965589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.965619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.965944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.965974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.966331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.966361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.966611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.966641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.966968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.966999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.967353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.967382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.967603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.967633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.967842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.967871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 
00:29:58.655 [2024-11-15 11:10:17.968249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.968278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.968631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.968662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.969002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.969030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.969383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.969412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.969783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.969814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.970062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.970090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.970296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.970325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.970675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.970704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.970950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.655 [2024-11-15 11:10:17.970978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.655 qpair failed and we were unable to recover it. 00:29:58.655 [2024-11-15 11:10:17.971338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.971367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 
00:29:58.656 [2024-11-15 11:10:17.971717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.971748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.972150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.972179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.972393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.972421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.972788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.972818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.973036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.973064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.973198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.973229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.973584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.973616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.973707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.973736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfb0000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.974196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.974289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 00:29:58.656 [2024-11-15 11:10:17.974816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.656 [2024-11-15 11:10:17.974910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.656 qpair failed and we were unable to recover it. 
00:29:58.656 [2024-11-15 11:10:17.975365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.656 [2024-11-15 11:10:17.975402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.656 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats continuously against tqpair=0x7fdfa8000b90 (addr=10.0.0.2, port=4420) from 2024-11-15 11:10:17.975365 through 11:10:18.044555; intermediate repetitions elided ...]
00:29:58.662 [2024-11-15 11:10:18.044526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.662 [2024-11-15 11:10:18.044555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420
00:29:58.662 qpair failed and we were unable to recover it.
00:29:58.662 [2024-11-15 11:10:18.044774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.044803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.045032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.045072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.045447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.045478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.045811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.045842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.046207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.046236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.046456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.046484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.046832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.046863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.047133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.047163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.047529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.047558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.047924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.047953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 
00:29:58.662 [2024-11-15 11:10:18.048181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.048209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.048558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.048594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.048817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.048846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.049199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.662 [2024-11-15 11:10:18.049227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.662 qpair failed and we were unable to recover it. 00:29:58.662 [2024-11-15 11:10:18.049460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.049488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.049833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.049865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.050210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.050238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.050599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.050629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.050876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.051226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.051255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 
00:29:58.663 [2024-11-15 11:10:18.051617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.051647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.051978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.052008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.052334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.052364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.052714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.052745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.053014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.053043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.053365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.053394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.053664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.053695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.053940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.053969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.054292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.054539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.054578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 
00:29:58.663 [2024-11-15 11:10:18.054918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.054948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.055270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.055301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.055640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.055670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.055771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.055799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.056087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.056116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.056311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.056340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.056706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.056736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.057069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.057103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.057536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.057575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.057890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.057920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 
00:29:58.663 [2024-11-15 11:10:18.058288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.058317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.058571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.058602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.058965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.058995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.059334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.059363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.059717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.059748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.060109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.060140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.060440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.060469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.060829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.060859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.061090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.061120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.061474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.061503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 
00:29:58.663 [2024-11-15 11:10:18.061903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.061934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.663 qpair failed and we were unable to recover it. 00:29:58.663 [2024-11-15 11:10:18.062318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.663 [2024-11-15 11:10:18.062348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.062604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.062639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.062874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.063297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.063327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.063687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.063718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.064075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.064105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.064312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.064341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.064713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.064744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.065057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.065087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 
00:29:58.664 [2024-11-15 11:10:18.065428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.065457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.065805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.065835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.066191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.066221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.066577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.066607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.066850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.066880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.067213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.067243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.067507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.067537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.067791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.067820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.068079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.068110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.068484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.068514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 
00:29:58.664 [2024-11-15 11:10:18.068887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.068917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.069280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.069309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.069628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.069659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.070018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.070048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.070412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.070441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.070675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.070704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.071116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.071146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.071484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.071514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.071914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.071944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.072294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.072322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 
00:29:58.664 [2024-11-15 11:10:18.072635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.072665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.072988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.073018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.073421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.073450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.073672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.073702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.074055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.074084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.074308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.664 [2024-11-15 11:10:18.074336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.664 qpair failed and we were unable to recover it. 00:29:58.664 [2024-11-15 11:10:18.074668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.074698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.074941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.074971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.075321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.075350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.075688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.075717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 
00:29:58.665 [2024-11-15 11:10:18.075927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.075956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.076204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.076243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.076555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.076594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.076774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.076803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.077135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.077164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.077521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.077551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.077824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.077854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.078176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.078206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.078593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.078625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.078945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.078973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 
00:29:58.665 [2024-11-15 11:10:18.079247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.079276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.079603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.079633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.079965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.079994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.080343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.080372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.080720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.080756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.081131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.081161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.081380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.081408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.081535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.081580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.081964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.081993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.082376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.082405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 
00:29:58.665 [2024-11-15 11:10:18.082620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.082650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.083011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.083041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.083317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.083345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.083759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.083790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.084134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.084163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.084487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.084515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.084883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.084913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.085120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.085149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.085499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.085528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.085768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.085801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 
00:29:58.665 [2024-11-15 11:10:18.086146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.086175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.086521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.086550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.086968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.665 [2024-11-15 11:10:18.086998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.665 qpair failed and we were unable to recover it. 00:29:58.665 [2024-11-15 11:10:18.087340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.087370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.087720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.087751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.088109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.088137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.088481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.088515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.088857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.088887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.089213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.089243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.089613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.089644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 
00:29:58.666 [2024-11-15 11:10:18.090000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.090029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.090253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.090281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.090630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.090661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.091057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.091085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.091324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.091353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.091689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.091719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.092058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.092087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.092444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.092473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.092818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.092848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 00:29:58.666 [2024-11-15 11:10:18.093207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.666 [2024-11-15 11:10:18.093236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.666 qpair failed and we were unable to recover it. 
00:29:58.944 [2024-11-15 11:10:18.162555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.162592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.162816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.162845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.162967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.162995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.163272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.163300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.163636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.163673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.163938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.163967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.164130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.164160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.164390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.164420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.164757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.164787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.165005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.165034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 
00:29:58.944 [2024-11-15 11:10:18.165302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.165342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.944 [2024-11-15 11:10:18.165706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.944 [2024-11-15 11:10:18.165736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.944 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.166096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.166125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.166462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.166492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.166873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.166903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.167252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.167281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.167501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.167532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.167941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.167973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.168329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.168358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.168534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.168571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 
00:29:58.945 [2024-11-15 11:10:18.168795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.168824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.169190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.169576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.169606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.169947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.169977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.170325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.170354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.170444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.170473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa8000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.170693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f7e00 is same with the state(6) to be set 00:29:58.945 [2024-11-15 11:10:18.171331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.171438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.171966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.172063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.172338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.172375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 
00:29:58.945 [2024-11-15 11:10:18.172606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.172644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.173003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.173033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.173262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.173292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.173516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.173544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.173886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.173916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.174281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.174311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.174637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.174666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.174786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.175127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.175157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.175552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.175591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 
00:29:58.945 [2024-11-15 11:10:18.175947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.175977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.176335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.176363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.176603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.176637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.176964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.176993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.177360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.177389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.177762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.177793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.178187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.178217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.178551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.178588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.945 qpair failed and we were unable to recover it. 00:29:58.945 [2024-11-15 11:10:18.178803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.945 [2024-11-15 11:10:18.178832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.179202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.179232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 
00:29:58.946 [2024-11-15 11:10:18.179450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.179479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.179825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.179856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.180197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.180227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.180595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.180626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.181015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.181044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.181393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.181424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.181654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.181687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.182064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.182093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.182428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.182465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.182560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.182601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 
00:29:58.946 [2024-11-15 11:10:18.182995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.183024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.183357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.183387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.183765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.183795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.184154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.184182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.184519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.184549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.184902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.184932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.185256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.185285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.185645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.185674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.186003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.186031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.186232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.186261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 
00:29:58.946 [2024-11-15 11:10:18.186608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.186637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.186969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.186998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.187365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.187394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.187723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.187756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.188087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.188116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.188485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.188513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.188841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.188872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.189230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.189259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.189572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.189603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.189798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.189827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 
00:29:58.946 [2024-11-15 11:10:18.190183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.190212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.190557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.190596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.190920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.190950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.191171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.191199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.191349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.946 [2024-11-15 11:10:18.191378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.946 qpair failed and we were unable to recover it. 00:29:58.946 [2024-11-15 11:10:18.191726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.191757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.191993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.192022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.192249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.192278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.192632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.192664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.193024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.193055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 
00:29:58.947 [2024-11-15 11:10:18.193403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.193432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.193685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.193714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.193964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.193994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.194336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.194365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.194719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.194749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.195020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.195048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.195392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.195422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.195657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.195687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.195905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.195940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.196292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.196320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 
00:29:58.947 [2024-11-15 11:10:18.196532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.196570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.196924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.196954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.197310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.197338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.197686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.197716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.197946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.197975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.198182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.198210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.198553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.198590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.198913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.198943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.199203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.199232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.199427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.199456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 
00:29:58.947 [2024-11-15 11:10:18.199817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.199848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.200206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.200234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.200587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.200618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.200953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.200981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.201321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.201350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.201706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.201736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.202132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.202161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.202527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.202556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.202781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.202809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 00:29:58.947 [2024-11-15 11:10:18.203175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.203204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.947 qpair failed and we were unable to recover it. 
00:29:58.947 [2024-11-15 11:10:18.203573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.947 [2024-11-15 11:10:18.203603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.203951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.203980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.204223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.204255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.204586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.204618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.204947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.204975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.205237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.205267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.205401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.205430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.205675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.205706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.206036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.206066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 00:29:58.948 [2024-11-15 11:10:18.206439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.948 [2024-11-15 11:10:18.206468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.948 qpair failed and we were unable to recover it. 
00:29:58.948 [2024-11-15 11:10:18.206683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.948 [2024-11-15 11:10:18.206713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.948 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 11:10:18.206999 through 11:10:18.277021 ...]
00:29:58.954 [2024-11-15 11:10:18.277380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.954 [2024-11-15 11:10:18.277409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.954 qpair failed and we were unable to recover it.
00:29:58.954 [2024-11-15 11:10:18.277640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.277669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.278038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.278069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.278270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.278299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.278570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.278600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.278948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.278976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.279306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.279335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.279688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.279719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.280063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.280480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.280509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.280862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.280893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 
00:29:58.954 [2024-11-15 11:10:18.281226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.281256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.281474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.281504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.281883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.281914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.282325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.282355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.282454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.282483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.282716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.282748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.283115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.283146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.283344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.283374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.283721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.283751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.284038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.284067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 
00:29:58.954 [2024-11-15 11:10:18.284420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.284449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.284674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.284704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.284922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.284952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.285188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.285218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.285624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.285654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.286020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.286050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.954 [2024-11-15 11:10:18.286399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.954 [2024-11-15 11:10:18.286429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.954 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.286781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.286811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.287029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.287058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.287384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.287413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 
00:29:58.955 [2024-11-15 11:10:18.287782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.287812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.288175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.288203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.288553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.288592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.288923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.288953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.289267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.289301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.289634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.289664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.290009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.290039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.290278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.290311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.290636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.290673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.290989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.291020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 
00:29:58.955 [2024-11-15 11:10:18.291370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.291399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.291769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.291800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.292044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.292072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.292295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.292324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.292515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.292545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.292670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.292698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.292906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.292936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.293266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.293295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.293640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.293670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.294017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.294046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 
00:29:58.955 [2024-11-15 11:10:18.294385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.294416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.294656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.294688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.294962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.294995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.295316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.295346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.295688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.295718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.296056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.296086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.296435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.296464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.296845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.296874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.297207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.297236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.297456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.297485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 
00:29:58.955 [2024-11-15 11:10:18.297857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.297887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.298225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.298253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.955 [2024-11-15 11:10:18.298577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.955 [2024-11-15 11:10:18.298606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.955 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.298963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.298992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.299338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.299366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.299732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.299763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.300011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.300039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.300323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.300353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.300704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.300735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.300968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.300999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 
00:29:58.956 [2024-11-15 11:10:18.301363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.301393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.301618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.301649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.301974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.302004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.302355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.302383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.302591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.302620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.302922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.302951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.303188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.303216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.303569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.303598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.303915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.303955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.304313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.304342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 
00:29:58.956 [2024-11-15 11:10:18.304691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.304721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.305091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.305120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.305300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.305329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.305552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.305589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.305944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.305973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.306194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.306222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.306464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.306493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.306719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.306748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.307112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.307143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.307457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.307486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 
00:29:58.956 [2024-11-15 11:10:18.307849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.307879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.308224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.308253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.308495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.308524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.308885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.308915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.309151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.309182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.309527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.309557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.309893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.309922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.310286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.310315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.310500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.310529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 00:29:58.956 [2024-11-15 11:10:18.310854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.956 [2024-11-15 11:10:18.310884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.956 qpair failed and we were unable to recover it. 
00:29:58.957 [2024-11-15 11:10:18.311105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.311134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.311376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.311406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.311784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.311815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.311902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.311930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.312251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.312281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.312650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.312680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.312880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.312908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.313276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.313305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.313520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.313549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.313936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.313965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 
00:29:58.957 [2024-11-15 11:10:18.314341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.314370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.314705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.314735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.314985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.315014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.315333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.315362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.315714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.315744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.316162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.316191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.316512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.316540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.316870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.316899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.317249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.317499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.317527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 
00:29:58.957 [2024-11-15 11:10:18.317916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.317945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.318312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.318341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.318682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.318711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.318948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.318976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.319329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.319359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.319603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.319634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.319979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.320008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.320347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.320377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.320726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.320756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 00:29:58.957 [2024-11-15 11:10:18.320953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.957 [2024-11-15 11:10:18.320981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.957 qpair failed and we were unable to recover it. 
00:29:58.957 [2024-11-15 11:10:18.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.957 [2024-11-15 11:10:18.321360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.957 qpair failed and we were unable to recover it.
00:29:58.958 [2024-11-15 11:10:18.331132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.958 [2024-11-15 11:10:18.331161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.958 qpair failed and we were unable to recover it.
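Every attempt in this run fails the same way: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the target side of the disconnect test is down, so posix_sock_create() fails at connect() and nvme_tcp_qpair_connect_sock() abandons the qpair. A quick way to confirm the errno name from any Linux shell with python3 (a side check, not part of the test run):

    # errno 111 -> ECONNREFUSED ("Connection refused")
    python3 -c 'import errno; print(errno.errorcode[111])'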
00:29:58.958 [2024-11-15 11:10:18.331487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.958 [2024-11-15 11:10:18.331516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.958 qpair failed and we were unable to recover it.
00:29:58.958 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:58.958 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:58.958 [2024-11-15 11:10:18.333602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.958 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:58.959 [2024-11-15 11:10:18.333632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.959 qpair failed and we were unable to recover it.
00:29:58.959 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:58.959 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.959 [2024-11-15 11:10:18.337163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.959 [2024-11-15 11:10:18.337194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.959 qpair failed and we were unable to recover it.
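The interleaved autotest_common.sh trace shows the harness finishing a bounded readiness poll while the initiator keeps retrying in the background: (( i == 0 )) indicates the polled condition held on the first iteration, the helper returns 0, and timing_exit closes the start_nvmf_tgt timer. The general shape of such a poll loop, as a minimal illustrative sketch (not the verbatim autotest_common.sh code; check_ready stands in for the real condition):

    wait_until_ready() {
        local i=0
        until check_ready; do
            (( i >= 20 )) && return 1   # give up after ~10 seconds
            sleep 0.5
            (( ++i ))
        done
        # i == 0 here means the condition already held on the first poll
        return 0
    }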
00:29:58.959 [2024-11-15 11:10:18.337542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.959 [2024-11-15 11:10:18.337579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.959 qpair failed and we were unable to recover it.
00:29:58.962 [2024-11-15 11:10:18.374384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.962 [2024-11-15 11:10:18.374414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.962 qpair failed and we were unable to recover it.
00:29:58.962 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:58.962 [2024-11-15 11:10:18.374745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.962 [2024-11-15 11:10:18.374775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.962 qpair failed and we were unable to recover it.
00:29:58.962 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.962 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.962 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.962 [2024-11-15 11:10:18.375504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.962 [2024-11-15 11:10:18.375534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.962 qpair failed and we were unable to recover it.
00:29:58.962 [2024-11-15 11:10:18.377300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.962 [2024-11-15 11:10:18.377329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.962 qpair failed and we were unable to recover it.
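Two setup steps land in this trace burst. First, nvmf/common.sh installs a cleanup trap; the `|| :` keeps the trap body going so nvmftestfini still runs even if process_shm fails. Second, target_disconnect.sh creates the backing block device over JSON-RPC: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent manual call against a running target would be (a sketch assuming the default RPC socket):

    # 64 MiB malloc bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0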
00:29:58.962 [2024-11-15 11:10:18.377677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.962 [2024-11-15 11:10:18.377707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.962 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-15 11:10:18.390846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.963 [2024-11-15 11:10:18.390874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420
00:29:58.963 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-15 11:10:18.391241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.391269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.391658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.391688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.391853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.391881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.392125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.392161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.392519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.392547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.392902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.392931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.393283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.393317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.963 [2024-11-15 11:10:18.393637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.963 [2024-11-15 11:10:18.393667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.963 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.394036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.394065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.394394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.394424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 
00:29:58.964 [2024-11-15 11:10:18.394650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.394680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.395021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.395049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.395258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.395289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.395624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.395653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.395949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.395978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.396311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.396340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.396685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.396715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.397066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.397095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.397438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.397467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.397863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.397894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 
00:29:58.964 [2024-11-15 11:10:18.398243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.398273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.398618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.398648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.398908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.398937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.399138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.399170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.399370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.399399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.399776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.399807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.400189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.400503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.400532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.400881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.400911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.401264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.401293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 
00:29:58.964 [2024-11-15 11:10:18.401508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.401537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.401894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.401924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.402258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.402288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.402648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.402679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.402995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.403024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.403375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.403404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.403755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.403785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.404152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.404180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.404540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.404576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 00:29:58.964 [2024-11-15 11:10:18.404911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.964 [2024-11-15 11:10:18.404940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.964 qpair failed and we were unable to recover it. 
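The host side retries the connect in a tight loop, so this three-line failure recurs many times between the two timestamps shown, changing only in the microsecond field. errno 111 on Linux is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2 port 4420 at that moment, which is consistent with the target's TCP transport and subsystem being configured only further down in this log. A quick way to decode the errno value, assuming python3 is present on the test box:

  # Map the numeric errno from the log to its symbolic name and message.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused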
00:29:58.965 Malloc0
00:29:58.965 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.965 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:58.965 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.965 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.965 [2024-11-15 11:10:18.414729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
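Here the harness runs rpc_cmd nvmf_create_transport -t tcp -o and the target answers with the TCP Transport Init notice, so the target's TCP transport layer is now up even while the host-side qpair retries are still failing. rpc_cmd appears to be the test harness's wrapper around SPDK's JSON-RPC client; a standalone sketch of the equivalent call follows, where the socket path is the default assumption and the harness's extra -o flag is not reproduced:

  # Create the TCP transport on a running nvmf_tgt via the JSON-RPC socket.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp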
00:29:58.966 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.966 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.966 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.966 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
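Next the harness creates the subsystem the disconnect test will target. Outside the harness the same step would look like the sketch below; in SPDK's rpc.py, -a allows any host NQN to connect and -s sets the subsystem serial number (the values are taken from the log line above):

  # Create NVMe-oF subsystem cnode1, open to any host, with a fixed serial number.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001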
00:29:58.967 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.967 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.967 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.967 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
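The Malloc0 bdev created earlier is now attached to the subsystem as a namespace. A standalone sketch of this step follows, together with the listener a target normally needs before hosts can connect; the add_listener line is an assumption here, since it is not visible in this stretch of the log:

  # Expose bdev Malloc0 as a namespace of cnode1.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Hosts can only connect once a listener exists (assumed step, not shown above).
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420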
00:29:58.967 [2024-11-15 11:10:18.438643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.438673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.438988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.439365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.439394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.439625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.439654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.440008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.440037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.440372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.967 [2024-11-15 11:10:18.440400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.967 qpair failed and we were unable to recover it. 00:29:58.967 [2024-11-15 11:10:18.440765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.440795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.441151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.441180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.441584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.441614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.441845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.441874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 
00:29:58.968 [2024-11-15 11:10:18.442229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.442258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.442603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.442633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.442838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.442867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.443205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.443234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.443586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.443616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.443994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.444022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.444361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.444390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.444647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.444677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.444909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.444937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.445174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.445202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 
00:29:58.968 [2024-11-15 11:10:18.445441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.445475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.445824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.445853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.446071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.446100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.446432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.446461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.446830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.446859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.447092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.447120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.447491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.447520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.968 [2024-11-15 11:10:18.447789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.447818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.968 [2024-11-15 11:10:18.448151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.448181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 
00:29:58.968 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.968 [2024-11-15 11:10:18.448415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.448443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.968 [2024-11-15 11:10:18.448847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.448876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.449189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.449219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.449571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.449601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.449959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.449988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.450312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.450341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.450558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.450594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.450945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.450974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 00:29:58.968 [2024-11-15 11:10:18.451194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.968 [2024-11-15 11:10:18.451222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.968 qpair failed and we were unable to recover it. 
00:29:58.968 [2024-11-15 11:10:18.451579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.451609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.451984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.452013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.452358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.452386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.452732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.452763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.453123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.453151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.453503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.453531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.453865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.453896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.454243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.454277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.454490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.454517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 00:29:58.969 [2024-11-15 11:10:18.454916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.969 [2024-11-15 11:10:18.454947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdfa4000b90 with addr=10.0.0.2, port=4420 00:29:58.969 qpair failed and we were unable to recover it. 
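errno 111 in the block above is ECONNREFUSED: the initiator's posix_sock_create() is dialing 10.0.0.2:4420 before the target has brought up a listener, so every TCP handshake is rejected outright. A minimal standalone sketch (not SPDK code; the address and port are taken from the log above) reproduces the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Dial the target address from the log; with no listener on the
         * port, connect() fails immediately with errno 111 (ECONNREFUSED). */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }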
00:29:58.969 [2024-11-15 11:10:18.454991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.231 [2024-11-15 11:10:18.465606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.231 [2024-11-15 11:10:18.465711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.231 [2024-11-15 11:10:18.465751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.231 [2024-11-15 11:10:18.465771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.231 [2024-11-15 11:10:18.465789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.231 [2024-11-15 11:10:18.465835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.231 qpair failed and we were unable to recover it. 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.231 11:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 582467 00:29:59.231 [2024-11-15 11:10:18.475597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.231 [2024-11-15 11:10:18.475687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.231 [2024-11-15 11:10:18.475722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.231 [2024-11-15 11:10:18.475740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.231 [2024-11-15 11:10:18.475758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.231 [2024-11-15 11:10:18.475795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.231 qpair failed and we were unable to recover it. 
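Note the failure mode changes once the listen NOTICE appears: the TCP connect now succeeds, but the NVMe-oF Fabrics CONNECT command itself is rejected. The target's _nvmf_ctrlr_add_io_qpair cannot find controller ID 0x1 (the controller the test disconnected), the connect poll returns rc -5 (-EIO), and the host reports sct 1, sc 130. Decoding that status pair per the NVMe-oF specification (a hedged illustrative helper, not SPDK code):

    #include <stdio.h>

    /* sct 1 is the "command specific" status code type; for a Fabrics
     * CONNECT command, sc 0x82 (decimal 130) is "Connect Invalid
     * Parameters", which the target returns when the CONNECT names an
     * unknown controller ID. Values per the NVMe-oF specification. */
    static const char *connect_status(int sct, int sc)
    {
        if (sct != 0x1)
            return "not a command-specific status";
        switch (sc) {
        case 0x80: return "CONNECT Incompatible Format";
        case 0x81: return "CONNECT Controller Busy";
        case 0x82: return "CONNECT Invalid Parameters (e.g. unknown cntlid)";
        case 0x83: return "CONNECT Restart Discovery";
        default:   return "other command-specific status";
        }
    }

    int main(void)
    {
        printf("sct 1, sc 130 -> %s\n", connect_status(1, 130));
        return 0;
    }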
00:29:59.231 [2024-11-15 11:10:18.485619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.231 [2024-11-15 11:10:18.485698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.231 [2024-11-15 11:10:18.485722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.231 [2024-11-15 11:10:18.485741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.231 [2024-11-15 11:10:18.485753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:29:59.231 [2024-11-15 11:10:18.485779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.231 qpair failed and we were unable to recover it.
00:29:59.231 [... the seven-line block above repeats with timestamps advancing roughly every 10 ms from 11:10:18.495 through 11:10:18.886; every reconnect attempt on qpair id 4 fails identically (Unknown controller ID 0x1 / sct 1, sc 130 / CQ transport error -6) and ends in "qpair failed and we were unable to recover it." ...]
00:29:59.497 [2024-11-15 11:10:18.896660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.497 [2024-11-15 11:10:18.896717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.497 [2024-11-15 11:10:18.896730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.497 [2024-11-15 11:10:18.896737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.497 [2024-11-15 11:10:18.896743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:29:59.497 [2024-11-15 11:10:18.896757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.497 qpair failed and we were unable to recover it.
00:29:59.497 [2024-11-15 11:10:18.906634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.906691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.906704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.906711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.906717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.906731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.916699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.916751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.916765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.916775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.916783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.916798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.926707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.926756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.926770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.926777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.926783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.926797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 
00:29:59.497 [2024-11-15 11:10:18.936757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.936815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.936828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.936835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.936841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.936855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.946761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.946811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.946824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.946831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.946837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.946851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.956811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.956864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.956878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.956884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.956891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.956908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 
00:29:59.497 [2024-11-15 11:10:18.966810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.966861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.966874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.966881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.966888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.966901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.976849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.497 [2024-11-15 11:10:18.976903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.497 [2024-11-15 11:10:18.976916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.497 [2024-11-15 11:10:18.976923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.497 [2024-11-15 11:10:18.976929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.497 [2024-11-15 11:10:18.976944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.497 qpair failed and we were unable to recover it. 00:29:59.497 [2024-11-15 11:10:18.986867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.498 [2024-11-15 11:10:18.986915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.498 [2024-11-15 11:10:18.986928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.498 [2024-11-15 11:10:18.986935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.498 [2024-11-15 11:10:18.986941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.498 [2024-11-15 11:10:18.986954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.498 qpair failed and we were unable to recover it. 
00:29:59.498 [2024-11-15 11:10:18.996926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.498 [2024-11-15 11:10:18.996972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.498 [2024-11-15 11:10:18.996985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.498 [2024-11-15 11:10:18.996992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.498 [2024-11-15 11:10:18.996999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.498 [2024-11-15 11:10:18.997012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.498 qpair failed and we were unable to recover it. 00:29:59.498 [2024-11-15 11:10:19.006932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.498 [2024-11-15 11:10:19.006986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.498 [2024-11-15 11:10:19.006999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.498 [2024-11-15 11:10:19.007006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.498 [2024-11-15 11:10:19.007012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.498 [2024-11-15 11:10:19.007026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.498 qpair failed and we were unable to recover it. 00:29:59.498 [2024-11-15 11:10:19.016969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.498 [2024-11-15 11:10:19.017021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.498 [2024-11-15 11:10:19.017034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.498 [2024-11-15 11:10:19.017041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.498 [2024-11-15 11:10:19.017047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.498 [2024-11-15 11:10:19.017060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.498 qpair failed and we were unable to recover it. 
00:29:59.760 [2024-11-15 11:10:19.026962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.027009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.027022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.027029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.027035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.027049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 00:29:59.760 [2024-11-15 11:10:19.037054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.037104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.037117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.037123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.037130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.037143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 00:29:59.760 [2024-11-15 11:10:19.047070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.047122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.047135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.047145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.047151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.047165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 
00:29:59.760 [2024-11-15 11:10:19.056987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.057043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.057056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.057063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.057069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.057083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 00:29:59.760 [2024-11-15 11:10:19.067071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.067122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.067135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.067142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.067148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.067162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 00:29:59.760 [2024-11-15 11:10:19.077146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.077197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.760 [2024-11-15 11:10:19.077210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.760 [2024-11-15 11:10:19.077217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.760 [2024-11-15 11:10:19.077223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.760 [2024-11-15 11:10:19.077237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.760 qpair failed and we were unable to recover it. 
00:29:59.760 [2024-11-15 11:10:19.087176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.760 [2024-11-15 11:10:19.087225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.087238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.087245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.087251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.087268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.097202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.097258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.097271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.097278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.097284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.097298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.107211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.107270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.107294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.107303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.107310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.107330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 
00:29:59.761 [2024-11-15 11:10:19.117251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.117308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.117323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.117330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.117337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.117351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.127285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.127336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.127350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.127357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.127364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.127378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.137313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.137374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.137398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.137407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.137414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.137433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 
00:29:59.761 [2024-11-15 11:10:19.147305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.147354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.147369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.147377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.147383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.147398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.157343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.157392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.157405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.157412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.157418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.157433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 00:29:59.761 [2024-11-15 11:10:19.167392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.167479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.761 [2024-11-15 11:10:19.167492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.761 [2024-11-15 11:10:19.167499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.761 [2024-11-15 11:10:19.167505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.761 [2024-11-15 11:10:19.167520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.761 qpair failed and we were unable to recover it. 
00:29:59.761 [2024-11-15 11:10:19.177428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.761 [2024-11-15 11:10:19.177487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.177504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.177511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.177518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.177532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.187425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.187476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.187489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.187496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.187503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.187517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.197475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.197526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.197539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.197546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.197552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.197570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 
00:29:59.762 [2024-11-15 11:10:19.207473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.207539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.207552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.207559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.207570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.207584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.217541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.217601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.217614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.217621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.217631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.217645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.227523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.227587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.227600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.227607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.227613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.227628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 
00:29:59.762 [2024-11-15 11:10:19.237608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.237657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.237670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.237677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.237683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.237697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.247656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.247705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.247718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.247725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.247731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.247745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 00:29:59.762 [2024-11-15 11:10:19.257639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.257698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.762 [2024-11-15 11:10:19.257711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.762 [2024-11-15 11:10:19.257718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.762 [2024-11-15 11:10:19.257724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.762 [2024-11-15 11:10:19.257738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.762 qpair failed and we were unable to recover it. 
00:29:59.762 [2024-11-15 11:10:19.267621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.762 [2024-11-15 11:10:19.267669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.763 [2024-11-15 11:10:19.267682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.763 [2024-11-15 11:10:19.267689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.763 [2024-11-15 11:10:19.267695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.763 [2024-11-15 11:10:19.267710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.763 qpair failed and we were unable to recover it. 00:29:59.763 [2024-11-15 11:10:19.277663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.763 [2024-11-15 11:10:19.277715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.763 [2024-11-15 11:10:19.277729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.763 [2024-11-15 11:10:19.277736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.763 [2024-11-15 11:10:19.277742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:29:59.763 [2024-11-15 11:10:19.277757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.763 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.287735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.287788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.287802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.287809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.287815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.287829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 
00:30:00.024 [2024-11-15 11:10:19.297814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.297871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.297884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.297891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.297898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.297912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.307751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.307804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.307821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.307828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.307834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.307848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.317783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.317836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.317849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.317856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.317863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.317877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 
00:30:00.024 [2024-11-15 11:10:19.327860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.327953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.327966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.327973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.327979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.327993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.337878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.337936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.337949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.337956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.337962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.337976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.347859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.347925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.347937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.347944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.347954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.347968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 
00:30:00.024 [2024-11-15 11:10:19.357925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.357977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.357989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.357996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.358002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.358016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.367944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.367996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.368009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.368016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.368022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.368036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.377962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.378015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.378028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.378035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.378041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.378055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 
00:30:00.024 [2024-11-15 11:10:19.387970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.024 [2024-11-15 11:10:19.388019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.024 [2024-11-15 11:10:19.388032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.024 [2024-11-15 11:10:19.388039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.024 [2024-11-15 11:10:19.388045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.024 [2024-11-15 11:10:19.388059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 qpair failed and we were unable to recover it. 00:30:00.024 [2024-11-15 11:10:19.398033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.025 [2024-11-15 11:10:19.398086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.025 [2024-11-15 11:10:19.398100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.025 [2024-11-15 11:10:19.398107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.025 [2024-11-15 11:10:19.398113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.025 [2024-11-15 11:10:19.398127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.025 qpair failed and we were unable to recover it. 00:30:00.025 [2024-11-15 11:10:19.408039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.025 [2024-11-15 11:10:19.408101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.025 [2024-11-15 11:10:19.408114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.025 [2024-11-15 11:10:19.408121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.025 [2024-11-15 11:10:19.408127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:00.025 [2024-11-15 11:10:19.408141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.025 qpair failed and we were unable to recover it. 
00:30:00.025 [2024-11-15 11:10:19.418077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.418142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.418155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.418162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.418168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.418181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.428047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.428094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.428107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.428114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.428120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.428134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.438144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.438195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.438211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.438218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.438224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.438237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.448138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.448188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.448200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.448207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.448214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.448227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.458195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.458249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.458262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.458269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.458275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.458289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.468161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.468263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.468278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.468285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.468291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.468308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.478255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.478303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.478317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.478328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.478334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.478348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.488273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.488332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.488356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.488365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.488371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.488391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.498344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.498401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.498415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.498423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.498429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.498444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.508310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.508363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.508376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.508383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.508389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.508404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.518384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.025 [2024-11-15 11:10:19.518436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.025 [2024-11-15 11:10:19.518450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.025 [2024-11-15 11:10:19.518457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.025 [2024-11-15 11:10:19.518463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.025 [2024-11-15 11:10:19.518483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.025 qpair failed and we were unable to recover it.
00:30:00.025 [2024-11-15 11:10:19.528277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.026 [2024-11-15 11:10:19.528338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.026 [2024-11-15 11:10:19.528351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.026 [2024-11-15 11:10:19.528358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.026 [2024-11-15 11:10:19.528364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.026 [2024-11-15 11:10:19.528378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.026 qpair failed and we were unable to recover it.
00:30:00.026 [2024-11-15 11:10:19.538453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.026 [2024-11-15 11:10:19.538506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.026 [2024-11-15 11:10:19.538519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.026 [2024-11-15 11:10:19.538526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.026 [2024-11-15 11:10:19.538532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.026 [2024-11-15 11:10:19.538546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.026 qpair failed and we were unable to recover it.
00:30:00.026 [2024-11-15 11:10:19.548443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.026 [2024-11-15 11:10:19.548497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.026 [2024-11-15 11:10:19.548511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.026 [2024-11-15 11:10:19.548518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.026 [2024-11-15 11:10:19.548524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.026 [2024-11-15 11:10:19.548538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.026 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.558482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.558568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.558581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.558588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.558594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.558609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.568519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.568611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.568624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.568631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.568637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.568651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.578532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.578591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.578605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.578611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.578618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.578632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.588545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.588598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.588610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.588617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.588624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.588638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.598609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.598658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.598671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.598677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.598684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.598697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.608516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.608571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.608584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.608599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.608606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.608621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.618657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.618725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.618739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.618746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.618752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.618766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.628655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.628704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.628717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.628724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.628730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.628745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.638734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.638792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.638806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.638813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.638819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.638833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.288 qpair failed and we were unable to recover it.
00:30:00.288 [2024-11-15 11:10:19.648753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.288 [2024-11-15 11:10:19.648803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.288 [2024-11-15 11:10:19.648817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.288 [2024-11-15 11:10:19.648824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.288 [2024-11-15 11:10:19.648830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.288 [2024-11-15 11:10:19.648849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.658800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.658855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.658868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.658875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.658882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.658896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.668785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.668832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.668845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.668852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.668858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.668872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.678848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.678896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.678909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.678916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.678922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.678936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.688878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.688925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.688938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.688944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.688950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.688964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.698857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.698914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.698927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.698934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.698940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.698954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.708885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.708948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.708961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.708968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.708974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.708988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.718942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.718994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.719007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.719014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.719020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.719034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.728960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.729055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.729068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.729075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.729081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.729095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.739011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.739064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.739080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.739087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.739093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.739107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.748997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.749050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.749063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.749070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.749076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.749090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.759060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.759111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.759124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.759131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.759140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.759154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.769083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.769134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.769147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.289 [2024-11-15 11:10:19.769154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.289 [2024-11-15 11:10:19.769160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.289 [2024-11-15 11:10:19.769174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.289 qpair failed and we were unable to recover it.
00:30:00.289 [2024-11-15 11:10:19.779103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.289 [2024-11-15 11:10:19.779162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.289 [2024-11-15 11:10:19.779177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.290 [2024-11-15 11:10:19.779183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.290 [2024-11-15 11:10:19.779194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.290 [2024-11-15 11:10:19.779209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.290 qpair failed and we were unable to recover it.
00:30:00.290 [2024-11-15 11:10:19.789113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.290 [2024-11-15 11:10:19.789162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.290 [2024-11-15 11:10:19.789176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.290 [2024-11-15 11:10:19.789182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.290 [2024-11-15 11:10:19.789189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.290 [2024-11-15 11:10:19.789204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.290 qpair failed and we were unable to recover it.
00:30:00.290 [2024-11-15 11:10:19.799165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.290 [2024-11-15 11:10:19.799217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.290 [2024-11-15 11:10:19.799230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.290 [2024-11-15 11:10:19.799237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.290 [2024-11-15 11:10:19.799243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.290 [2024-11-15 11:10:19.799257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.290 qpair failed and we were unable to recover it.
00:30:00.290 [2024-11-15 11:10:19.809074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.290 [2024-11-15 11:10:19.809121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.290 [2024-11-15 11:10:19.809134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.290 [2024-11-15 11:10:19.809141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.290 [2024-11-15 11:10:19.809147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.290 [2024-11-15 11:10:19.809161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.290 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.819230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.819288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.819302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.819308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.819315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.819328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.829233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.829293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.829318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.829327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.829333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.829353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.839290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.839345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.839369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.839378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.839385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.839405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.849245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.849301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.849316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.849323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.849330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.849345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.859234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.859287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.859300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.859307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.859313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.859328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.869344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.869392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.869409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.869416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.869423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.869437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.879384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.879443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.879456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.879463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.879470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.879483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.889410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.889459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.889472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.889479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.889485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.889499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.899413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.899465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.899478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.899485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.899491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.552 [2024-11-15 11:10:19.899505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.552 qpair failed and we were unable to recover it.
00:30:00.552 [2024-11-15 11:10:19.909334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.552 [2024-11-15 11:10:19.909385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.552 [2024-11-15 11:10:19.909398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.552 [2024-11-15 11:10:19.909405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.552 [2024-11-15 11:10:19.909415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.909429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.919511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.919568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.919582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.919589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.919595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.919609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.929513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.929571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.929584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.929591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.929597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.929611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.939567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.939625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.939639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.939646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.939652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.939670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.949556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.949609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.949623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.949630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.949636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.949651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.959599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.959659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.959672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.959679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.959686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.959700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.969600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.969652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.969665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.969672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.969678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.969692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.979636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.979695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.979708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.979715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.979721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.979736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.989642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.989692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.989705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.989712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.989718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.989733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:19.999677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:19.999733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:19.999746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:19.999753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:19.999759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:19.999774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:20.009764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:20.009820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:20.009835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:20.009842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:20.009848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:20.009863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.553 qpair failed and we were unable to recover it.
00:30:00.553 [2024-11-15 11:10:20.019768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.553 [2024-11-15 11:10:20.019826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.553 [2024-11-15 11:10:20.019839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.553 [2024-11-15 11:10:20.019847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.553 [2024-11-15 11:10:20.019854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.553 [2024-11-15 11:10:20.019868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.554 [2024-11-15 11:10:20.029793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.554 [2024-11-15 11:10:20.029865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.554 [2024-11-15 11:10:20.029878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.554 [2024-11-15 11:10:20.029885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.554 [2024-11-15 11:10:20.029892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.554 [2024-11-15 11:10:20.029906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.554 [2024-11-15 11:10:20.039833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.554 [2024-11-15 11:10:20.039885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.554 [2024-11-15 11:10:20.039900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.554 [2024-11-15 11:10:20.039911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.554 [2024-11-15 11:10:20.039918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.554 [2024-11-15 11:10:20.039934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.554 [2024-11-15 11:10:20.049855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.554 [2024-11-15 11:10:20.049908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.554 [2024-11-15 11:10:20.049921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.554 [2024-11-15 11:10:20.049928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.554 [2024-11-15 11:10:20.049935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.554 [2024-11-15 11:10:20.049950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.554 [2024-11-15 11:10:20.059891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.554 [2024-11-15 11:10:20.059947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.554 [2024-11-15 11:10:20.059960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.554 [2024-11-15 11:10:20.059967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.554 [2024-11-15 11:10:20.059973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.554 [2024-11-15 11:10:20.059987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.554 [2024-11-15 11:10:20.069865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.554 [2024-11-15 11:10:20.069946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.554 [2024-11-15 11:10:20.069959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.554 [2024-11-15 11:10:20.069966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.554 [2024-11-15 11:10:20.069973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.554 [2024-11-15 11:10:20.069987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.554 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.079926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.079979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.079993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.080000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.080006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.080024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.089958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.090010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.090023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.090030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.090036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.090050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.099993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.100049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.100062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.100069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.100076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.100090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.109973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.110025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.110038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.110045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.110052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.110066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.120016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.120070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.120082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.120089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.120096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.120110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.130066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.130152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.130165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.130172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.130179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.130193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-15 11:10:20.140106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-15 11:10:20.140164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-15 11:10:20.140177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-15 11:10:20.140184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-15 11:10:20.140191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.816 [2024-11-15 11:10:20.140205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.150092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.150144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.150158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.150164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.150171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.150185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.160161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.160234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.160247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.160254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.160260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.160274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.170178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.170236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.170260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.170274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.170281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.170301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.180183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.180241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.180256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.180264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.180271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.180286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.190192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.190247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.190272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.190280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.190287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.190307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.200216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.200272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.200296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.200305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.200312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.200332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.210245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.210298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.210322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.210330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.210337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.210362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.220199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.220295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.220311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.220319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.220326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.220341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.230298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.230365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.230390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.230399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.230406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.230426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.240339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.240394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.240409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.240416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.240423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.240438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.250356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.250441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.250455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.250462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.250469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.250484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.260421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.260474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.260487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.260494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.260501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.260515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.817 [2024-11-15 11:10:20.270409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.817 [2024-11-15 11:10:20.270502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.817 [2024-11-15 11:10:20.270515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.817 [2024-11-15 11:10:20.270522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.817 [2024-11-15 11:10:20.270528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.817 [2024-11-15 11:10:20.270543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.817 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.280453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.280501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.280514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.280521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.280527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.280541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.290459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.290505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.290519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.290526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.290532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.290546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.300544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.300644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.300661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.300668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.300674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.300689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.310515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.310567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.310581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.310588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.310594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.310609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.320582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.320664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.320677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.320684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.320691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.320705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.330526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.330575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.330589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.330596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.330602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.330617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:00.818 [2024-11-15 11:10:20.340637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.818 [2024-11-15 11:10:20.340689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.818 [2024-11-15 11:10:20.340702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.818 [2024-11-15 11:10:20.340709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.818 [2024-11-15 11:10:20.340722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:00.818 [2024-11-15 11:10:20.340737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.818 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.350698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.350746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.350760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.350766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.350773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.350788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.360652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.360701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.360714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.360721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.360728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.360744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.370671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.370717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.370730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.370737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.370743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.370758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.380785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.380844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.380857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.380865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.380871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.380885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.390751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.390809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.390822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.390830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.390836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.390851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.400765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.400814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.400827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.400835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.400841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.400855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.410793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.410841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.410854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.410860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.410867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.410881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.420861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-15 11:10:20.420916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-15 11:10:20.420929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-15 11:10:20.420936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-15 11:10:20.420943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.080 [2024-11-15 11:10:20.420956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-15 11:10:20.430726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.430779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.430795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.430802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.430809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.430823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.440869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.440925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.440938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.440945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.440951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.440965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.450887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.450938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.450951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.450958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.450964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.450978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.460953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.461007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.461020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.461027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.461033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.461047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.470949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.471003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.471016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.471023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.471032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.471047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.480977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.481029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.481042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.481049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.481055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.481069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.490989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.491034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.491047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.491053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.491060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.491073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.501056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.501143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.501156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.501164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.501170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.501184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.511062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.511110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.511122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.511129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.511135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.511149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.521068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.521120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.521133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.521139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.521146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.521159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.531134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.531211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.531224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.531231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.531237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.081 [2024-11-15 11:10:20.531250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-15 11:10:20.541172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-15 11:10:20.541226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-15 11:10:20.541240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-15 11:10:20.541247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-15 11:10:20.541253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.082 [2024-11-15 11:10:20.541268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.082 qpair failed and we were unable to recover it.
00:30:01.082 [2024-11-15 11:10:20.551154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.082 [2024-11-15 11:10:20.551208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.082 [2024-11-15 11:10:20.551222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.082 [2024-11-15 11:10:20.551229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.082 [2024-11-15 11:10:20.551235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:01.082 [2024-11-15 11:10:20.551249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.082 qpair failed and we were unable to recover it.
00:30:01.082 [2024-11-15 11:10:20.561206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.082 [2024-11-15 11:10:20.561251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.082 [2024-11-15 11:10:20.561264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.082 [2024-11-15 11:10:20.561271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.082 [2024-11-15 11:10:20.561277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.082 [2024-11-15 11:10:20.561291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.082 qpair failed and we were unable to recover it. 00:30:01.082 [2024-11-15 11:10:20.571236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.082 [2024-11-15 11:10:20.571326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.082 [2024-11-15 11:10:20.571338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.082 [2024-11-15 11:10:20.571345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.082 [2024-11-15 11:10:20.571352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.082 [2024-11-15 11:10:20.571366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.082 qpair failed and we were unable to recover it. 00:30:01.082 [2024-11-15 11:10:20.581281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.082 [2024-11-15 11:10:20.581339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.082 [2024-11-15 11:10:20.581351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.082 [2024-11-15 11:10:20.581358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.082 [2024-11-15 11:10:20.581365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.082 [2024-11-15 11:10:20.581378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.082 qpair failed and we were unable to recover it. 
00:30:01.082 [2024-11-15 11:10:20.591228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.082 [2024-11-15 11:10:20.591302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.082 [2024-11-15 11:10:20.591315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.082 [2024-11-15 11:10:20.591322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.082 [2024-11-15 11:10:20.591328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.082 [2024-11-15 11:10:20.591342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.082 qpair failed and we were unable to recover it. 00:30:01.082 [2024-11-15 11:10:20.601270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.082 [2024-11-15 11:10:20.601320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.082 [2024-11-15 11:10:20.601334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.082 [2024-11-15 11:10:20.601345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.082 [2024-11-15 11:10:20.601351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.082 [2024-11-15 11:10:20.601373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.082 qpair failed and we were unable to recover it. 00:30:01.344 [2024-11-15 11:10:20.611297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.344 [2024-11-15 11:10:20.611342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.344 [2024-11-15 11:10:20.611356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.344 [2024-11-15 11:10:20.611363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.344 [2024-11-15 11:10:20.611369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.344 [2024-11-15 11:10:20.611383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.344 qpair failed and we were unable to recover it. 
00:30:01.344 [2024-11-15 11:10:20.621364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.621421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.621434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.621441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.621447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.621461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.631345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.631398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.631411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.631418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.631424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.631438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.641385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.641430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.641443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.641449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.641456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.641473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 
00:30:01.345 [2024-11-15 11:10:20.651427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.651502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.651514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.651521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.651528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.651541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.661428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.661479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.661492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.661499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.661505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.661519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.671475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.671522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.671535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.671541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.671548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.671565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 
00:30:01.345 [2024-11-15 11:10:20.681509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.681567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.681580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.681587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.681593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.681607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.691391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.691446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.691460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.691467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.691473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.691488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.701633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.701684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.701697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.701704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.701710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.701724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 
00:30:01.345 [2024-11-15 11:10:20.711505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.711548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.711566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.711573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.711579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.711594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.721620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.721669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.721683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.721690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.721696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.721713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.731641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.731690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.731704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.731714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.731720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.731734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 
00:30:01.345 [2024-11-15 11:10:20.741635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.345 [2024-11-15 11:10:20.741681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.345 [2024-11-15 11:10:20.741695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.345 [2024-11-15 11:10:20.741701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.345 [2024-11-15 11:10:20.741708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.345 [2024-11-15 11:10:20.741722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.345 qpair failed and we were unable to recover it. 00:30:01.345 [2024-11-15 11:10:20.751655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.751701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.751715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.751722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.751728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.751742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.761577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.761638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.761650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.761657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.761663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.761677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 
00:30:01.346 [2024-11-15 11:10:20.771692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.771749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.771762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.771769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.771775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.771792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.781749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.781793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.781806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.781813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.781819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.781834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.791808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.791899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.791912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.791919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.791926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.791940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 
00:30:01.346 [2024-11-15 11:10:20.801791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.801837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.801849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.801856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.801863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.801876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.811853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.811933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.811947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.811954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.811960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.811974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.821821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.821867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.821880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.821887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.821893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.821907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 
00:30:01.346 [2024-11-15 11:10:20.831903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.831954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.831967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.831974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.831980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.831994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.841910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.841982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.841995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.842002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.842009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.842022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.346 [2024-11-15 11:10:20.851962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.852006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.852019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.852026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.852032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.852046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 
00:30:01.346 [2024-11-15 11:10:20.861975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.346 [2024-11-15 11:10:20.862021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.346 [2024-11-15 11:10:20.862036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.346 [2024-11-15 11:10:20.862044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.346 [2024-11-15 11:10:20.862050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.346 [2024-11-15 11:10:20.862063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.346 qpair failed and we were unable to recover it. 00:30:01.608 [2024-11-15 11:10:20.872008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.608 [2024-11-15 11:10:20.872054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.872067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.872074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.872081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.872094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.882022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.882072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.882084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.882091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.882097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.882111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 
00:30:01.609 [2024-11-15 11:10:20.892067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.892124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.892136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.892143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.892149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.892163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.902094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.902142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.902155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.902162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.902173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.902187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.912127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.912171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.912185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.912192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.912198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.912212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 
00:30:01.609 [2024-11-15 11:10:20.922113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.922155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.922167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.922175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.922181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.922195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.932174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.932254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.932266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.932273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.932280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.932294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.942238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.942313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.942326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.942333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.942340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.942353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 
00:30:01.609 [2024-11-15 11:10:20.952240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.952285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.952298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.952305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.952312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.952325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.962249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.962291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.962303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.962310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.962316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.962330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.972249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.972294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.972307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.972314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.972321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.972335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 
00:30:01.609 [2024-11-15 11:10:20.982267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.982355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.982368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.982375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.982382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.982395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:20.992347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:20.992423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.609 [2024-11-15 11:10:20.992439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.609 [2024-11-15 11:10:20.992446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.609 [2024-11-15 11:10:20.992453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.609 [2024-11-15 11:10:20.992467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.609 qpair failed and we were unable to recover it. 00:30:01.609 [2024-11-15 11:10:21.002378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.609 [2024-11-15 11:10:21.002445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.002458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.002465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.002472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.002485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 
00:30:01.610 [2024-11-15 11:10:21.012340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.012382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.012395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.012402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.012408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.012421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.022422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.022471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.022484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.022492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.022498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.022513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.032453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.032501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.032514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.032521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.032531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.032545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 
00:30:01.610 [2024-11-15 11:10:21.042473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.042530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.042543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.042550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.042557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.042574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.052491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.052554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.052571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.052578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.052584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.052598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.062534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.062586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.062599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.062606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.062612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.062627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 
00:30:01.610 [2024-11-15 11:10:21.072449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.072496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.072510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.072517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.072523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.072537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.082474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.082521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.082535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.082542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.082549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.082565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.092477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.092522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.092535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.092542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.092548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.092565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 
00:30:01.610 [2024-11-15 11:10:21.102667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.102711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.102724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.102731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.102737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.102751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.112671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.112720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.112733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.112740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.112746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.112760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 00:30:01.610 [2024-11-15 11:10:21.122701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.122750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.610 [2024-11-15 11:10:21.122764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.610 [2024-11-15 11:10:21.122771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.610 [2024-11-15 11:10:21.122777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.610 [2024-11-15 11:10:21.122791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.610 qpair failed and we were unable to recover it. 
00:30:01.610 [2024-11-15 11:10:21.132588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.610 [2024-11-15 11:10:21.132636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.611 [2024-11-15 11:10:21.132650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.611 [2024-11-15 11:10:21.132656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.611 [2024-11-15 11:10:21.132663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.611 [2024-11-15 11:10:21.132677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.611 qpair failed and we were unable to recover it. 00:30:01.872 [2024-11-15 11:10:21.142715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.872 [2024-11-15 11:10:21.142782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.872 [2024-11-15 11:10:21.142795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.872 [2024-11-15 11:10:21.142802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.872 [2024-11-15 11:10:21.142808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.872 [2024-11-15 11:10:21.142822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.872 qpair failed and we were unable to recover it. 00:30:01.872 [2024-11-15 11:10:21.152802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.872 [2024-11-15 11:10:21.152847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.872 [2024-11-15 11:10:21.152859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.872 [2024-11-15 11:10:21.152866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.872 [2024-11-15 11:10:21.152873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.872 [2024-11-15 11:10:21.152887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.872 qpair failed and we were unable to recover it. 
00:30:01.872 [2024-11-15 11:10:21.162781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.872 [2024-11-15 11:10:21.162826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.162839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.162849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.162856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.162870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.172835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.172884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.172897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.172904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.172910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.172924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.182861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.182909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.182923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.182930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.182936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.182955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 
00:30:01.873 [2024-11-15 11:10:21.192879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.192927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.192940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.192947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.192953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.192967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.202905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.202949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.202963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.202969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.202976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.202993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.212927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.212968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.212980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.212987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.212993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.213007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 
00:30:01.873 [2024-11-15 11:10:21.222963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.223009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.223022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.223029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.223035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.223049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.232989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.233045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.233058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.233064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.233071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.233084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.243024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.243102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.243115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.243121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.243128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.243141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 
00:30:01.873 [2024-11-15 11:10:21.253041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.253092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.253106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.253113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.253119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.253135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.263079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.263139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.263152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.263159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.263165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.263179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 00:30:01.873 [2024-11-15 11:10:21.273110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.273156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.273169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.273176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.273182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.873 [2024-11-15 11:10:21.273196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.873 qpair failed and we were unable to recover it. 
00:30:01.873 [2024-11-15 11:10:21.283133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.873 [2024-11-15 11:10:21.283177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.873 [2024-11-15 11:10:21.283190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.873 [2024-11-15 11:10:21.283196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.873 [2024-11-15 11:10:21.283203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.283217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.293137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.293176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.293192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.293199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.293206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.293220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.303174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.303221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.303234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.303241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.303247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.303261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 
00:30:01.874 [2024-11-15 11:10:21.313229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.313277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.313290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.313297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.313303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.313317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.323239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.323284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.323297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.323304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.323310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.323325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.333246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.333285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.333298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.333305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.333311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.333328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 
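Two details worth noting in the run so far: the target-side timestamps advance in roughly 10 ms steps (21.072, 21.082, 21.092, ...), i.e. the host is retrying the I/O-queue CONNECT on a steady cadence, and every record names the same transport qpair (tqpair=0x7fdfa4000b90) on qpair id 4, so this is one qpair being retried rather than many distinct failures. A few greps make that easy to confirm when triaging a log like this one (build.log is a placeholder for wherever this output was saved):

# Count the CONNECT rejections and check they are all the same failure mode.
grep -c 'Unknown controller ID 0x1' build.log
grep -c 'sct 1, sc 130' build.log
# One unique address here means a single TCP qpair is being retried.
grep -o 'tqpair=0x[0-9a-f]*' build.log | sort -u
# Likewise for the qpair id reported by the generic completion path.
grep -o 'on qpair id [0-9]*' build.log | sort -u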
00:30:01.874 [2024-11-15 11:10:21.343285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.343336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.343361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.343369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.343376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.343397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.353316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.353370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.353394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.353403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.353410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.353430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.363339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.363383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.363398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.363405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.363412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.363427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 
00:30:01.874 [2024-11-15 11:10:21.373248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.373292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.373306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.373313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.373319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.373334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.383357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.383405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.383419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.383426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.383432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.383446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 00:30:01.874 [2024-11-15 11:10:21.393402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.874 [2024-11-15 11:10:21.393479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.874 [2024-11-15 11:10:21.393492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.874 [2024-11-15 11:10:21.393499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.874 [2024-11-15 11:10:21.393505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:01.874 [2024-11-15 11:10:21.393519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.874 qpair failed and we were unable to recover it. 
00:30:02.136 [2024-11-15 11:10:21.403445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.136 [2024-11-15 11:10:21.403488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.136 [2024-11-15 11:10:21.403500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.136 [2024-11-15 11:10:21.403507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.136 [2024-11-15 11:10:21.403514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.136 [2024-11-15 11:10:21.403528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.136 qpair failed and we were unable to recover it. 00:30:02.136 [2024-11-15 11:10:21.413337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.136 [2024-11-15 11:10:21.413380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.136 [2024-11-15 11:10:21.413393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.136 [2024-11-15 11:10:21.413400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.136 [2024-11-15 11:10:21.413406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.136 [2024-11-15 11:10:21.413420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.136 qpair failed and we were unable to recover it. 00:30:02.136 [2024-11-15 11:10:21.423502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.136 [2024-11-15 11:10:21.423615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.423632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.423639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.423646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.423660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 
00:30:02.137 [2024-11-15 11:10:21.433563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.433612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.433625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.433632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.433638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.433653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.443450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.443495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.443510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.443518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.443524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.443540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.453581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.453629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.453643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.453650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.453657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.453671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 
00:30:02.137 [2024-11-15 11:10:21.463617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.463667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.463680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.463687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.463697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.463711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.473635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.473688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.473701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.473708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.473714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.473728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.483651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.483710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.483723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.483730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.483737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.483751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 
00:30:02.137 [2024-11-15 11:10:21.493654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.493704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.493718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.493724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.493733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.493748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.503705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.503748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.503762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.503769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.503776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.503790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.513768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.513819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.513832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.513839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.513846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.513860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 
00:30:02.137 [2024-11-15 11:10:21.523745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.523792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.523805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.523813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.523819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.523835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.533795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.533841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.533854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.533861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.533867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.533881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 00:30:02.137 [2024-11-15 11:10:21.543806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.543850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.137 [2024-11-15 11:10:21.543863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.137 [2024-11-15 11:10:21.543870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.137 [2024-11-15 11:10:21.543877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.137 [2024-11-15 11:10:21.543891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.137 qpair failed and we were unable to recover it. 
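On the target side, the rejection originates in _nvmf_ctrlr_add_io_qpair: the I/O-queue CONNECT carries a controller ID the subsystem can no longer resolve, which is the expected behaviour when the admin qpair and its controller have already been torn down mid-test. If the target is still running, its view of live controllers can be inspected over SPDK's RPC interface; a sketch, assuming the default rpc.py socket and that these RPC method names match the SPDK revision under test:

# List the subsystems the target exposes and the controllers currently
# attached to the subsystem these CONNECTs are aimed at.
sudo ./scripts/rpc.py nvmf_get_subsystems
sudo ./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
# Per-qpair detail for the same subsystem.
sudo ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1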
00:30:02.137 [2024-11-15 11:10:21.553871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.137 [2024-11-15 11:10:21.553915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.553931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.553938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.553944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.553958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.563836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.563882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.563896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.563903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.563909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.563923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.573896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.573942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.573956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.573962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.573969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.573983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 
00:30:02.138 [2024-11-15 11:10:21.583927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.583976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.583989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.583996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.584003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.584016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.593938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.593988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.594001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.594011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.594017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.594031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.603968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.604014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.604027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.604034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.604040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.604053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 
00:30:02.138 [2024-11-15 11:10:21.614005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.614048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.614061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.614068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.614074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.614087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.624039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.624091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.624104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.624111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.624117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.624131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.633946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.633997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.634011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.634018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.634024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.634038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 
00:30:02.138 [2024-11-15 11:10:21.644018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.644061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.644074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.644081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.644087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.644101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.138 [2024-11-15 11:10:21.654081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.138 [2024-11-15 11:10:21.654121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.138 [2024-11-15 11:10:21.654135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.138 [2024-11-15 11:10:21.654142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.138 [2024-11-15 11:10:21.654148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.138 [2024-11-15 11:10:21.654162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.138 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.664148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.664191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.664205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.664212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.664218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.664232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 
00:30:02.401 [2024-11-15 11:10:21.674149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.674224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.674237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.674244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.674250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.674264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.684205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.684253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.684266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.684273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.684279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.684293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.694225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.694272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.694285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.694292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.694298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.694312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 
00:30:02.401 [2024-11-15 11:10:21.704260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.704305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.704318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.704324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.704330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.704344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.714287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.714331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.714344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.714351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.714357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.714370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.724179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.724222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.724235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.724245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.724251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.724265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 
00:30:02.401 [2024-11-15 11:10:21.734199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.734241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.734254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.734261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.734268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.734282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.744365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.744416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.744429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.744436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.744442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.744456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 00:30:02.401 [2024-11-15 11:10:21.754408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.754493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.754506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.754513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.754519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.401 [2024-11-15 11:10:21.754533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.401 qpair failed and we were unable to recover it. 
00:30:02.401 [2024-11-15 11:10:21.764399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.401 [2024-11-15 11:10:21.764449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.401 [2024-11-15 11:10:21.764462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.401 [2024-11-15 11:10:21.764469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.401 [2024-11-15 11:10:21.764475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.764493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.774455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.774506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.774519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.774526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.774532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.774546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.784468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.784514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.784527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.784534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.784540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.784555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-11-15 11:10:21.794528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.794580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.794593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.794600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.794606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.794620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.804522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.804584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.804598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.804606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.804612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.804626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.814541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.814628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.814643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.814650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.814657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.814675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-11-15 11:10:21.824589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.824669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.824683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.824690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.824697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.824711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.834581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.834669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.834682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.834689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.834695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.834709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.844638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.844680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.844693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.844700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.844707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.844721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-11-15 11:10:21.854604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.854669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.854685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.854692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.854698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.854713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.864665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.864724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.864738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.864744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.864751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.864765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.874699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.874750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.874763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.874770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.874776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.874790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-11-15 11:10:21.884721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.884762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.884775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.884782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.884788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.884802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.894742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.894807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.894820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.894826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.894833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.894853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.904799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.904842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.904855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.904862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.904868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.904882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-11-15 11:10:21.914839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.914895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.914910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.914917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.914925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.914942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-11-15 11:10:21.924827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.402 [2024-11-15 11:10:21.924875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.402 [2024-11-15 11:10:21.924890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.402 [2024-11-15 11:10:21.924898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.402 [2024-11-15 11:10:21.924906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.402 [2024-11-15 11:10:21.924922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.664 [2024-11-15 11:10:21.934877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.664 [2024-11-15 11:10:21.934920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.664 [2024-11-15 11:10:21.934933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.664 [2024-11-15 11:10:21.934940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.664 [2024-11-15 11:10:21.934946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.664 [2024-11-15 11:10:21.934960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-11-15 11:10:21.944902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.944945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.944958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.944965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.944972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.944986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:21.954941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.954992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.955005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.955011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.955018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.955031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:21.964820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.964862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.964877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.964884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.964890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.964904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-11-15 11:10:21.974865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.974924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.974936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.974943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.974950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.974964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:21.985012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.985063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.985080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.985087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.985094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.985108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:21.995038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:21.995126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:21.995139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:21.995146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:21.995152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:21.995166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-11-15 11:10:22.005028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.005079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.005092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.005099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.005105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.005119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:22.015082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.015128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.015141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.015147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.015153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.015167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:22.025108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.025155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.025168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.025175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.025185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.025199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-11-15 11:10:22.035149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.035237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.035249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.035256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.035262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.035277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:22.045159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.045205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.045218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.045225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.045231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.045245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 00:30:02.665 [2024-11-15 11:10:22.055162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.055223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.055236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.055243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.055249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.665 [2024-11-15 11:10:22.055263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.665 qpair failed and we were unable to recover it. 
00:30:02.665 [2024-11-15 11:10:22.065214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.665 [2024-11-15 11:10:22.065259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.665 [2024-11-15 11:10:22.065276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.665 [2024-11-15 11:10:22.065283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.665 [2024-11-15 11:10:22.065289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.065305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.075255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.075303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.075316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.075323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.075329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.075344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.085234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.085284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.085297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.085304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.085310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.085324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-11-15 11:10:22.095294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.095339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.095352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.095359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.095365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.095379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.105288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.105333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.105346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.105353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.105360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.105373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.115357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.115419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.115435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.115442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.115448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.115462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-11-15 11:10:22.125383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.125424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.125437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.125444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.125450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.125464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.135372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.135417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.135430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.135437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.135443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.135457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.145437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.145480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.145493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.145500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.145506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.145520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-11-15 11:10:22.155455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.155509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.155522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.155533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.155539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.155553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.165480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.165526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.165539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.165547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.165553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.165571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.666 [2024-11-15 11:10:22.175394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.175442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.175457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.175464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.175470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.175485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 
00:30:02.666 [2024-11-15 11:10:22.185538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.666 [2024-11-15 11:10:22.185593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.666 [2024-11-15 11:10:22.185607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.666 [2024-11-15 11:10:22.185614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.666 [2024-11-15 11:10:22.185620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.666 [2024-11-15 11:10:22.185635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.666 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.195574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.195621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.195633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.195640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.195647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.195661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.205457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.205500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.205513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.205520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.205526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.205540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 
00:30:02.930 [2024-11-15 11:10:22.215612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.215658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.215671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.215677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.215684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.215698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.225641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.225685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.225698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.225705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.225711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.225725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.235702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.235751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.235765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.235772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.235778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.235792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 
00:30:02.930 [2024-11-15 11:10:22.245569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.245672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.245685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.245692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.245698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.245712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.255708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.255753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.255766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.255772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.255778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.255792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 00:30:02.930 [2024-11-15 11:10:22.265742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.265790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.930 [2024-11-15 11:10:22.265803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.930 [2024-11-15 11:10:22.265810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.930 [2024-11-15 11:10:22.265816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.930 [2024-11-15 11:10:22.265830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.930 qpair failed and we were unable to recover it. 
00:30:02.930 [2024-11-15 11:10:22.275796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.930 [2024-11-15 11:10:22.275845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.275858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.275864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.275871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.275884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.285794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.285868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.285881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.285891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.285897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.285911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.295803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.295848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.295861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.295868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.295874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.295888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 
00:30:02.931 [2024-11-15 11:10:22.305861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.305907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.305920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.305926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.305933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.305947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.315897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.315947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.315959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.315966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.315972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.315986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.325941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.326033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.326047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.326054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.326061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.326078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 
00:30:02.931 [2024-11-15 11:10:22.335928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.335983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.335997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.336003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.336010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.336027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.345967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.346012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.346026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.346033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.346039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.346053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.356005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.356050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.356063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.356070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.356076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.356090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 
00:30:02.931 [2024-11-15 11:10:22.366004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.366045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.366058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.366065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.366071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.366085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.376049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.376095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.376108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.376114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.376121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.376134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 00:30:02.931 [2024-11-15 11:10:22.386070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.931 [2024-11-15 11:10:22.386166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.931 [2024-11-15 11:10:22.386179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.931 [2024-11-15 11:10:22.386186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.931 [2024-11-15 11:10:22.386192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.931 [2024-11-15 11:10:22.386206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.931 qpair failed and we were unable to recover it. 
00:30:02.931 [2024-11-15 11:10:22.396114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.396163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.396176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.396183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.396189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.396203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 00:30:02.932 [2024-11-15 11:10:22.406128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.406172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.406185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.406191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.406198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.406211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 00:30:02.932 [2024-11-15 11:10:22.416129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.416172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.416189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.416196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.416203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.416216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 
00:30:02.932 [2024-11-15 11:10:22.426182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.426231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.426244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.426251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.426257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.426271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 00:30:02.932 [2024-11-15 11:10:22.436230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.436280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.436293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.436300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.436306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.436320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 00:30:02.932 [2024-11-15 11:10:22.446225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.932 [2024-11-15 11:10:22.446267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.932 [2024-11-15 11:10:22.446280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.932 [2024-11-15 11:10:22.446287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.932 [2024-11-15 11:10:22.446293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:02.932 [2024-11-15 11:10:22.446307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.932 qpair failed and we were unable to recover it. 
00:30:03.193 [2024-11-15 11:10:22.456131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.193 [2024-11-15 11:10:22.456178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.193 [2024-11-15 11:10:22.456191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.193 [2024-11-15 11:10:22.456198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.193 [2024-11-15 11:10:22.456208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.193 [2024-11-15 11:10:22.456222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.193 qpair failed and we were unable to recover it. 00:30:03.193 [2024-11-15 11:10:22.466304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.193 [2024-11-15 11:10:22.466348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.466361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.466368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.466374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.466388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.476323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.476373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.476397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.476405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.476412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.476432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 
00:30:03.194 [2024-11-15 11:10:22.486349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.486394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.486409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.486416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.486423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.486438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.496370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.496415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.496429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.496436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.496443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.496457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.506307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.506388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.506401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.506408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.506415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.506429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 
00:30:03.194 [2024-11-15 11:10:22.516404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.516454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.516467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.516474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.516481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.516494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.526387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.526449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.526462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.526469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.526475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.526489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.536499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.536544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.536557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.536567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.536574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.536588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 
00:30:03.194 [2024-11-15 11:10:22.546489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.546583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.546599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.546606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.546613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.546627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.556559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.556610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.556624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.556631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.556637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.556652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.566437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.566485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.566498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.566505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.566511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.566526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 
00:30:03.194 [2024-11-15 11:10:22.576598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.576646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.576660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.576667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.576673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.576688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.586496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.194 [2024-11-15 11:10:22.586541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.194 [2024-11-15 11:10:22.586554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.194 [2024-11-15 11:10:22.586565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.194 [2024-11-15 11:10:22.586579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.194 [2024-11-15 11:10:22.586594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.194 qpair failed and we were unable to recover it. 00:30:03.194 [2024-11-15 11:10:22.596695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.596743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.596756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.596762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.596769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.596783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 
00:30:03.195 [2024-11-15 11:10:22.606659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.606702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.606715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.606722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.606728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.606742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.616706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.616750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.616763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.616770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.616776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.616790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.626716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.626764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.626777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.626783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.626790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.626803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 
00:30:03.195 [2024-11-15 11:10:22.636779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.636827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.636840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.636847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.636853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.636867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.646804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.646885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.646898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.646904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.646910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.646924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.656807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.656849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.656862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.656869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.656875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.656889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 
00:30:03.195 [2024-11-15 11:10:22.666837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.666931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.666944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.666950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.666957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.666971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.676885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.676932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.676947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.676955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.676961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.676975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.686892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.686975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.686989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.686996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.687002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.687018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 
00:30:03.195 [2024-11-15 11:10:22.696919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.696963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.696976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.696983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.696989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.697003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.706945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.706996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.707015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.707022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.707028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.707047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 00:30:03.195 [2024-11-15 11:10:22.716970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.195 [2024-11-15 11:10:22.717014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.195 [2024-11-15 11:10:22.717027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.195 [2024-11-15 11:10:22.717037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.195 [2024-11-15 11:10:22.717044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.195 [2024-11-15 11:10:22.717058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.195 qpair failed and we were unable to recover it. 
00:30:03.457 [2024-11-15 11:10:22.726964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.457 [2024-11-15 11:10:22.727013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.457 [2024-11-15 11:10:22.727026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.457 [2024-11-15 11:10:22.727033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.457 [2024-11-15 11:10:22.727039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.457 [2024-11-15 11:10:22.727053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.457 qpair failed and we were unable to recover it. 00:30:03.457 [2024-11-15 11:10:22.737015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.457 [2024-11-15 11:10:22.737071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.457 [2024-11-15 11:10:22.737084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.457 [2024-11-15 11:10:22.737091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.457 [2024-11-15 11:10:22.737098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.457 [2024-11-15 11:10:22.737111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.457 qpair failed and we were unable to recover it. 00:30:03.457 [2024-11-15 11:10:22.747055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.457 [2024-11-15 11:10:22.747100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.747113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.747120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.747126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.747140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 
00:30:03.458 [2024-11-15 11:10:22.757065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.757111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.757124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.757131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.757137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.757151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.767072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.767113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.767126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.767133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.767139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.767153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.777138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.777185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.777198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.777205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.777211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.777225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 
00:30:03.458 [2024-11-15 11:10:22.787161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.787204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.787218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.787225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.787231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.787245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.797183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.797270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.797283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.797290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.797296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.797310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.807207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.807257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.807270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.807277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.807283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.807297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 
00:30:03.458 [2024-11-15 11:10:22.817243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.817327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.817340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.817347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.817353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.817367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.827244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.827288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.827301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.827308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.827314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.827328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 00:30:03.458 [2024-11-15 11:10:22.837309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.458 [2024-11-15 11:10:22.837359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.458 [2024-11-15 11:10:22.837372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.458 [2024-11-15 11:10:22.837379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.458 [2024-11-15 11:10:22.837385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.458 [2024-11-15 11:10:22.837398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.458 qpair failed and we were unable to recover it. 
00:30:03.458 [2024-11-15 11:10:22.847320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.458 [2024-11-15 11:10:22.847377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.458 [2024-11-15 11:10:22.847389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.458 [2024-11-15 11:10:22.847400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.458 [2024-11-15 11:10:22.847406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.458 [2024-11-15 11:10:22.847420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.458 qpair failed and we were unable to recover it.
00:30:03.458 [2024-11-15 11:10:22.857351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.458 [2024-11-15 11:10:22.857397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.458 [2024-11-15 11:10:22.857409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.458 [2024-11-15 11:10:22.857416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.458 [2024-11-15 11:10:22.857423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.458 [2024-11-15 11:10:22.857436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.458 qpair failed and we were unable to recover it.
00:30:03.458 [2024-11-15 11:10:22.867382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.458 [2024-11-15 11:10:22.867427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.458 [2024-11-15 11:10:22.867440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.458 [2024-11-15 11:10:22.867447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.458 [2024-11-15 11:10:22.867453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.458 [2024-11-15 11:10:22.867467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.458 qpair failed and we were unable to recover it.
00:30:03.458 [2024-11-15 11:10:22.877419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.458 [2024-11-15 11:10:22.877476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.877488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.877495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.877501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.877515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.887434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.887502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.887515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.887522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.887528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.887546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.897424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.897497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.897510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.897517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.897523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.897536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.907481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.907544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.907557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.907567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.907574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.907588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.917513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.917582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.917595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.917602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.917608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.917622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.927536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.927581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.927594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.927601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.927608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.927621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.937438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.937482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.937495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.937502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.937508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.937522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.947599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.947642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.947655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.947662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.947668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.947682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.957630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.957679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.957693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.957700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.957706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.957720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.967693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.967774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.967788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.967795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.967801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.967820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.459 [2024-11-15 11:10:22.977546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.459 [2024-11-15 11:10:22.977593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.459 [2024-11-15 11:10:22.977610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.459 [2024-11-15 11:10:22.977617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.459 [2024-11-15 11:10:22.977623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.459 [2024-11-15 11:10:22.977638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.459 qpair failed and we were unable to recover it.
00:30:03.722 [2024-11-15 11:10:22.987687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.722 [2024-11-15 11:10:22.987736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.722 [2024-11-15 11:10:22.987749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.722 [2024-11-15 11:10:22.987756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.722 [2024-11-15 11:10:22.987762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.722 [2024-11-15 11:10:22.987776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.722 qpair failed and we were unable to recover it.
00:30:03.722 [2024-11-15 11:10:22.997620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.722 [2024-11-15 11:10:22.997666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.722 [2024-11-15 11:10:22.997680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.722 [2024-11-15 11:10:22.997687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.722 [2024-11-15 11:10:22.997693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.722 [2024-11-15 11:10:22.997708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.722 qpair failed and we were unable to recover it.
00:30:03.722 [2024-11-15 11:10:23.007767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.722 [2024-11-15 11:10:23.007809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.722 [2024-11-15 11:10:23.007823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.722 [2024-11-15 11:10:23.007830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.722 [2024-11-15 11:10:23.007836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.722 [2024-11-15 11:10:23.007850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.722 qpair failed and we were unable to recover it.
00:30:03.722 [2024-11-15 11:10:23.017791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.017834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.017846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.017854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.017863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.017877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.027816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.027861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.027874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.027881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.027887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.027901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.037925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.038011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.038024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.038031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.038037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.038051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.047868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.047914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.047926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.047933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.047939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.047953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.057889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.057928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.057941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.057948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.057954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.057968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.067913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.067961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.067974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.067980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.067987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.068000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.077952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.077996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.078009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.078016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.078022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.078036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.087965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.088057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.088070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.088077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.088083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.088097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.098018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.098064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.098077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.098084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.098091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.098104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.107985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.108056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.108072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.108079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.108086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.723 [2024-11-15 11:10:23.108099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.723 qpair failed and we were unable to recover it.
00:30:03.723 [2024-11-15 11:10:23.117932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.723 [2024-11-15 11:10:23.117980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.723 [2024-11-15 11:10:23.117993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.723 [2024-11-15 11:10:23.118000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.723 [2024-11-15 11:10:23.118006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.118020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.128086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.128174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.128187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.128194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.128200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.128214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.138115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.138206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.138219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.138226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.138232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.138246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.148001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.148044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.148057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.148064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.148074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.148088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.158174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.158219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.158232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.158238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.158244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.158258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.168195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.168278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.168290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.168297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.168304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.168318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.178214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.178261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.178274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.178281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.178287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.178300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.188110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.188156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.188169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.188176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.188182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.188196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.198232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.198281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.198294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.198301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.198307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.198321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.208306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.208353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.208377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.208385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.208393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.208412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.218332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.218416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.218440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.218449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.218456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.218476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.228361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.228411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.724 [2024-11-15 11:10:23.228426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.724 [2024-11-15 11:10:23.228433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.724 [2024-11-15 11:10:23.228439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.724 [2024-11-15 11:10:23.228455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.724 qpair failed and we were unable to recover it.
00:30:03.724 [2024-11-15 11:10:23.238426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.724 [2024-11-15 11:10:23.238473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.725 [2024-11-15 11:10:23.238491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.725 [2024-11-15 11:10:23.238498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.725 [2024-11-15 11:10:23.238504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.725 [2024-11-15 11:10:23.238519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.725 qpair failed and we were unable to recover it.
00:30:03.986 [2024-11-15 11:10:23.248397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.248438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.248451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.248458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.248465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.248479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.258406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.258447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.258460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.258467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.258473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.258488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.268446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.268501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.268514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.268521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.268527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.268541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.278495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.278545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.278558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.278577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.278584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.278598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.288509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.288556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.288573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.288580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.288586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.288600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.298496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.298543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.298556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.298566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.298573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.298587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.308607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.308662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.308674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.308681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.308687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.308701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.318613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.318691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.318704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.318711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.318717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.318735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.328524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.328573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.328586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.328593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.328599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.328613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.338676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.338735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.338748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.338755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.338762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.338776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.348677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.348763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.348776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.348783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.348789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.348803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.358588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.358636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.358651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.358658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.358664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.987 [2024-11-15 11:10:23.358679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.987 qpair failed and we were unable to recover it.
00:30:03.987 [2024-11-15 11:10:23.368594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.987 [2024-11-15 11:10:23.368643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.987 [2024-11-15 11:10:23.368657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.987 [2024-11-15 11:10:23.368665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.987 [2024-11-15 11:10:23.368671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.988 [2024-11-15 11:10:23.368685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.988 qpair failed and we were unable to recover it.
00:30:03.988 [2024-11-15 11:10:23.378736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.988 [2024-11-15 11:10:23.378780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.988 [2024-11-15 11:10:23.378793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.988 [2024-11-15 11:10:23.378800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.988 [2024-11-15 11:10:23.378807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.988 [2024-11-15 11:10:23.378821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.988 qpair failed and we were unable to recover it.
00:30:03.988 [2024-11-15 11:10:23.388790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.988 [2024-11-15 11:10:23.388836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.988 [2024-11-15 11:10:23.388849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.988 [2024-11-15 11:10:23.388856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.988 [2024-11-15 11:10:23.388862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.988 [2024-11-15 11:10:23.388876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.988 qpair failed and we were unable to recover it.
00:30:03.988 [2024-11-15 11:10:23.398795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.988 [2024-11-15 11:10:23.398841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.988 [2024-11-15 11:10:23.398854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.988 [2024-11-15 11:10:23.398860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.988 [2024-11-15 11:10:23.398867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.988 [2024-11-15 11:10:23.398881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.988 qpair failed and we were unable to recover it.
00:30:03.988 [2024-11-15 11:10:23.408830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.988 [2024-11-15 11:10:23.408874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.988 [2024-11-15 11:10:23.408887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.988 [2024-11-15 11:10:23.408897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.988 [2024-11-15 11:10:23.408904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90
00:30:03.988 [2024-11-15 11:10:23.408918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:03.988 qpair failed and we were unable to recover it.
00:30:03.988 [2024-11-15 11:10:23.418849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.418891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.418904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.418911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.418917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.418931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.428770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.428818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.428831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.428838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.428844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.428858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.438925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.438974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.438986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.438993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.438999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.439013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-11-15 11:10:23.448952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.449022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.449035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.449042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.449048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.449065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.458938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.458980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.458993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.459000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.459006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.459020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.468997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.469051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.469064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.469070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.469076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.469090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-11-15 11:10:23.479011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.479059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.479072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.479079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.479085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.479099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.489054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.489138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.489152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.988 [2024-11-15 11:10:23.489158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.988 [2024-11-15 11:10:23.489164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.988 [2024-11-15 11:10:23.489179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-11-15 11:10:23.499076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.988 [2024-11-15 11:10:23.499122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.988 [2024-11-15 11:10:23.499135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.989 [2024-11-15 11:10:23.499142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.989 [2024-11-15 11:10:23.499148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.989 [2024-11-15 11:10:23.499162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.989 qpair failed and we were unable to recover it. 
00:30:03.989 [2024-11-15 11:10:23.509111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.989 [2024-11-15 11:10:23.509159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.989 [2024-11-15 11:10:23.509171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.989 [2024-11-15 11:10:23.509178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.989 [2024-11-15 11:10:23.509184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:03.989 [2024-11-15 11:10:23.509198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.989 qpair failed and we were unable to recover it. 00:30:04.250 [2024-11-15 11:10:23.519157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.250 [2024-11-15 11:10:23.519235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.250 [2024-11-15 11:10:23.519248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.519255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.519261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.519275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.529117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.529178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.529191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.529198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.529204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.529218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 
00:30:04.251 [2024-11-15 11:10:23.539191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.539241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.539257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.539264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.539270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.539284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.549216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.549261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.549274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.549281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.549287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.549300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.559260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.559307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.559320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.559327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.559334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.559348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 
00:30:04.251 [2024-11-15 11:10:23.569272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.569319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.569332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.569339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.569345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.569359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.579289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.579330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.579343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.579350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.579360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.579374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.589339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.589405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.589418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.589425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.589431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.589445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 
00:30:04.251 [2024-11-15 11:10:23.599308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.599355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.599368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.599375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.599381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.599395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.609364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.609414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.609427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.609434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.609440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.609454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.619402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.619452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.619465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.619472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.619478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.619492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 
00:30:04.251 [2024-11-15 11:10:23.629303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.629350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.629365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.629372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.629378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.629393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.639458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.639504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.251 [2024-11-15 11:10:23.639518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.251 [2024-11-15 11:10:23.639525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.251 [2024-11-15 11:10:23.639532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.251 [2024-11-15 11:10:23.639546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.251 qpair failed and we were unable to recover it. 00:30:04.251 [2024-11-15 11:10:23.649486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.251 [2024-11-15 11:10:23.649528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.649541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.649548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.649554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.649580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 
00:30:04.252 [2024-11-15 11:10:23.659505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.659552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.659569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.659576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.659582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.659597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.669536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.669584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.669601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.669607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.669614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.669628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.679598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.679648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.679661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.679668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.679675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.679689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 
00:30:04.252 [2024-11-15 11:10:23.689587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.689641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.689654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.689660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.689667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.689681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.699599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.699648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.699661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.699668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.699674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.699688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.709511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.709555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.709572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.709579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.709589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.709603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 
00:30:04.252 [2024-11-15 11:10:23.719668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.719717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.719731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.719737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.719744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.719758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.729689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.729738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.729751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.729758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.729764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.729778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.739712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.739764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.739779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.739785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.739792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.739808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 
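Each block ends at the same place because the host discovers the dead queue pair in its completion-polling path: spdk_nvme_qpair_process_completions() returns a negated errno instead of a completion count, and -6 is ENXIO, "No such device or address" — exactly the "CQ transport error -6" printed by nvme_qpair.c:812 above. A minimal polling loop distinguishing the two cases might look like the following sketch (it assumes an already-connected qpair; the recovery comment is illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Drain completions from one I/O qpair.  Returns >= 0 (number of
 * completions processed) while healthy, or a negated errno on a
 * transport failure -- e.g. -ENXIO (-6), as in the log above. */
static int32_t
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "no artificial limit". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		fprintf(stderr, "CQ transport error %d (%s)\n",
			rc, strerror(-rc));
		/* Caller is expected to disconnect or recover the qpair. */
	}
	return rc;
}
```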
00:30:04.252 [2024-11-15 11:10:23.749741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.749784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.749798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.749805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.749811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.749825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.759781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.759829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.759842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.759849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.759855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfa4000b90 00:30:04.252 [2024-11-15 11:10:23.759869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.252 qpair failed and we were unable to recover it. 00:30:04.252 [2024-11-15 11:10:23.769790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.252 [2024-11-15 11:10:23.769896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.252 [2024-11-15 11:10:23.769958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.252 [2024-11-15 11:10:23.769983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.252 [2024-11-15 11:10:23.770004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfb0000b90 00:30:04.252 [2024-11-15 11:10:23.770061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.252 qpair failed and we were unable to recover it. 
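At 11:10:23.769 the failure moves from I/O qpair id 4 to qpair id 1 on a fresh TCP connection (tqpair=0x7fdfa4000b90 becomes 0x7fdfb0000b90), and in the lines that follow the host stops trying to revive individual queue pairs: a Keep Alive submission fails and the whole controller is reset ("Controller properly reset."). A host application can express that escalation with the public SPDK API; the sketch below is a simplified outline under that assumption, not the test's actual recovery code, and it omits re-creating the I/O qpairs afterwards:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Escalate from a dead qpair to a full controller reset, roughly
 * the "controller has encountered a failure and is being reset"
 * path seen in the log.  Simplified: real code must re-allocate
 * and reconnect its I/O qpairs once the reset succeeds. */
static int
recover_controller(struct spdk_nvme_ctrlr *ctrlr,
		   struct spdk_nvme_qpair *dead_qpair)
{
	int rc;

	spdk_nvme_ctrlr_free_io_qpair(dead_qpair);

	if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
		rc = spdk_nvme_ctrlr_reset(ctrlr);
		if (rc != 0) {
			fprintf(stderr, "controller reset failed: %d\n", rc);
			return rc;
		}
	}
	return 0;
}
```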
00:30:04.514 [2024-11-15 11:10:23.779844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.514 [2024-11-15 11:10:23.779943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.514 [2024-11-15 11:10:23.779971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.514 [2024-11-15 11:10:23.779986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.514 [2024-11-15 11:10:23.779999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdfb0000b90 00:30:04.514 [2024-11-15 11:10:23.780029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.514 qpair failed and we were unable to recover it. 00:30:04.514 [2024-11-15 11:10:23.780258] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:04.514 A controller has encountered a failure and is being reset. 00:30:04.514 [2024-11-15 11:10:23.780368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f7e00 (9): Bad file descriptor 00:30:04.514 Controller properly reset. 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Write completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Write completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Read completed with error (sct=0, sc=8) 00:30:04.514 starting I/O failed 00:30:04.514 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 
Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Write completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 Read completed with error (sct=0, sc=8) 00:30:04.515 starting I/O failed 00:30:04.515 [2024-11-15 11:10:23.841004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.515 Initializing NVMe Controllers 00:30:04.515 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:04.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:04.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:04.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:04.515 Initialization complete. Launching workers. 00:30:04.515 Starting thread on core 1 00:30:04.515 Starting thread on core 2 00:30:04.515 Starting thread on core 3 00:30:04.515 Starting thread on core 0 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:04.515 00:30:04.515 real 0m11.471s 00:30:04.515 user 0m22.078s 00:30:04.515 sys 0m3.785s 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.515 ************************************ 00:30:04.515 END TEST nvmf_target_disconnect_tc2 00:30:04.515 ************************************ 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.515 rmmod nvme_tcp 00:30:04.515 rmmod nvme_fabrics 00:30:04.515 rmmod nvme_keyring 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 583152 ']' 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 583152 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 583152 ']' 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 583152 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:04.515 11:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 583152 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 583152' 00:30:04.777 killing process with pid 583152 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 583152 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 583152 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.777 11:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.325 11:10:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.325 00:30:07.325 real 0m21.918s 00:30:07.325 user 0m50.152s 00:30:07.325 sys 0m9.900s 00:30:07.326 11:10:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:07.326 11:10:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:07.326 
************************************ 00:30:07.326 END TEST nvmf_target_disconnect 00:30:07.326 ************************************ 00:30:07.326 11:10:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:07.326 00:30:07.326 real 6m34.061s 00:30:07.326 user 11m20.757s 00:30:07.326 sys 2m16.156s 00:30:07.326 11:10:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:07.326 11:10:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.326 ************************************ 00:30:07.326 END TEST nvmf_host 00:30:07.326 ************************************ 00:30:07.326 11:10:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:07.326 11:10:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:07.326 11:10:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:07.326 11:10:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:07.326 11:10:26 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:07.326 11:10:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.326 ************************************ 00:30:07.326 START TEST nvmf_target_core_interrupt_mode 00:30:07.326 ************************************ 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:07.326 * Looking for test storage... 00:30:07.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:07.326 11:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.326 --rc genhtml_branch_coverage=1 00:30:07.326 --rc genhtml_function_coverage=1 00:30:07.326 --rc genhtml_legend=1 00:30:07.326 --rc geninfo_all_blocks=1 00:30:07.326 --rc geninfo_unexecuted_blocks=1 00:30:07.326 00:30:07.326 ' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.326 --rc genhtml_branch_coverage=1 00:30:07.326 --rc genhtml_function_coverage=1 00:30:07.326 --rc genhtml_legend=1 00:30:07.326 --rc geninfo_all_blocks=1 00:30:07.326 --rc geninfo_unexecuted_blocks=1 00:30:07.326 00:30:07.326 ' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.326 --rc genhtml_branch_coverage=1 00:30:07.326 --rc genhtml_function_coverage=1 00:30:07.326 --rc genhtml_legend=1 00:30:07.326 --rc geninfo_all_blocks=1 00:30:07.326 --rc geninfo_unexecuted_blocks=1 00:30:07.326 00:30:07.326 ' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.326 --rc genhtml_branch_coverage=1 00:30:07.326 --rc genhtml_function_coverage=1 00:30:07.326 --rc genhtml_legend=1 00:30:07.326 --rc geninfo_all_blocks=1 00:30:07.326 --rc geninfo_unexecuted_blocks=1 
00:30:07.326 00:30:07.326 ' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.326 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
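The scripts/common.sh trace above is the harness asking whether the installed lcov is older than version 2, since pre-2.0 lcov wants the --rc lcov_branch_coverage/lcov_function_coverage spellings that end up exported as LCOV_OPTS and LCOV. A minimal standalone sketch of that comparison, keeping the helper names visible in the trace (lt, cmp_versions, decimal) but not reproducing the SPDK code verbatim:

    #!/usr/bin/env bash
    # decimal: pass numeric components through, treat anything else as 0.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    # cmp_versions A op B, compared component-wise over ., - and : separators.
    # Only the '<' branch is sketched; it is the one the trace exercises.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && return 1
            (( ver1[v] < ver2[v] )) && return 0   # strictly less: '<' holds
        done
        return 1   # all components equal: not strictly less
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'old lcov: add the --rc lcov_* coverage switches'

Here lcov --version piped through awk '{print $NF}' yielded 1.15, so lt 1.15 2 returns 0 and the branch/function coverage options are exported, exactly as echoed in the trace.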
00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:07.327 ************************************ 00:30:07.327 START TEST nvmf_abort 00:30:07.327 ************************************ 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:07.327 * Looking for test storage... 00:30:07.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:07.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.327 --rc genhtml_branch_coverage=1 00:30:07.327 --rc genhtml_function_coverage=1 00:30:07.327 --rc genhtml_legend=1 00:30:07.327 --rc geninfo_all_blocks=1 00:30:07.327 --rc geninfo_unexecuted_blocks=1 00:30:07.327 00:30:07.327 ' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:07.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.327 --rc genhtml_branch_coverage=1 00:30:07.327 --rc genhtml_function_coverage=1 00:30:07.327 --rc genhtml_legend=1 00:30:07.327 --rc geninfo_all_blocks=1 00:30:07.327 --rc geninfo_unexecuted_blocks=1 00:30:07.327 00:30:07.327 ' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:07.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.327 --rc genhtml_branch_coverage=1 00:30:07.327 --rc genhtml_function_coverage=1 00:30:07.327 --rc genhtml_legend=1 00:30:07.327 --rc geninfo_all_blocks=1 00:30:07.327 --rc geninfo_unexecuted_blocks=1 00:30:07.327 00:30:07.327 ' 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:07.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.327 --rc genhtml_branch_coverage=1 00:30:07.327 --rc genhtml_function_coverage=1 00:30:07.327 --rc genhtml_legend=1 00:30:07.327 --rc geninfo_all_blocks=1 00:30:07.327 --rc geninfo_unexecuted_blocks=1 00:30:07.327 00:30:07.327 ' 00:30:07.327 11:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.327 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.589 11:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.589 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.590 11:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.736 11:10:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.736 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:15.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:15.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:15.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:15.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:30:15.737 00:30:15.737 --- 10.0.0.2 ping statistics --- 00:30:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.737 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:30:15.737 00:30:15.737 --- 10.0.0.1 ping statistics --- 00:30:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.737 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.737 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=588675 
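nvmf_tcp_init, traced above, turns the two ice ports into a point-to-point rig on a single host: the target port cvl_0_0 moves into a private network namespace while the initiator port cvl_0_1 stays in the root namespace, so NVMe/TCP traffic crosses a real link rather than loopback. The same plumbing, condensed from the commands in the trace (run as root; interface names and addresses are the ones the harness chose here):

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it
    # later with a plain grep -v over iptables-save output:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Both pings came back in well under half a millisecond, and NVMF_APP is then prefixed with ip netns exec cvl_0_0_ns_spdk so nvmf_tgt itself runs inside the namespace.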
00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 588675 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 588675 ']' 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.738 11:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.738 [2024-11-15 11:10:34.451158] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.738 [2024-11-15 11:10:34.452311] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:30:15.738 [2024-11-15 11:10:34.452358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.738 [2024-11-15 11:10:34.554000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:15.738 [2024-11-15 11:10:34.605622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.738 [2024-11-15 11:10:34.605675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.738 [2024-11-15 11:10:34.605684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.738 [2024-11-15 11:10:34.605691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.738 [2024-11-15 11:10:34.605697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.738 [2024-11-15 11:10:34.607834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.738 [2024-11-15 11:10:34.607996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.738 [2024-11-15 11:10:34.607997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.738 [2024-11-15 11:10:34.686124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:15.738 [2024-11-15 11:10:34.687167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:15.738 [2024-11-15 11:10:34.687718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:15.738 [2024-11-15 11:10:34.687857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:15.738 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:15.738 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:15.738 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.738 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 [2024-11-15 11:10:35.312918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 Malloc0 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 Delay0 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 [2024-11-15 11:10:35.408848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.000 11:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:16.000 [2024-11-15 11:10:35.514893] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:18.549 Initializing NVMe Controllers 00:30:18.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:18.549 controller IO queue size 128 less than required 00:30:18.549 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:18.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:18.549 Initialization complete. Launching workers. 
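The rpc_cmd calls above build the entire data path for the abort test: a TCP transport, a 64 MiB malloc disk wrapped in a delay bdev whose latencies are set to one second on every I/O class (so submitted commands reliably sit queued long enough to be aborted), and a subsystem exposing that namespace on 10.0.0.2:4420. Written out as a plain rpc.py sequence (rpc_cmd is the harness wrapper around scripts/rpc.py; every flag below is taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # latencies in microseconds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

build/examples/abort then drives the target from core 0 at queue depth 128 for one second (-c 0x1 -q 128 -t 1), firing abort commands at the outstanding I/O; the NS/CTRLR tallies that follow count how many aborts were submitted and how many found their target.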
00:30:18.549 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28690 00:30:18.549 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28751, failed to submit 66 00:30:18.549 success 28690, unsuccessful 61, failed 0 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.549 rmmod nvme_tcp 00:30:18.549 rmmod nvme_fabrics 00:30:18.549 rmmod nvme_keyring 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 588675 ']' 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 588675 00:30:18.549 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 588675 ']' 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 588675 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 588675 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 588675' 00:30:18.550 killing process with pid 588675 00:30:18.550 
11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 588675 00:30:18.550 11:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 588675 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.550 11:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.094 00:30:21.094 real 0m13.449s 00:30:21.094 user 0m11.259s 00:30:21.094 sys 0m6.877s 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.094 ************************************ 00:30:21.094 END TEST nvmf_abort 00:30:21.094 ************************************ 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:21.094 ************************************ 00:30:21.094 START TEST nvmf_ns_hotplug_stress 00:30:21.094 ************************************ 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:21.094 * Looking for test storage... 
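nvmftestfini, traced above before the END TEST banner, unwinds the fixture in reverse order: stop the target, unload the kernel initiator modules (the modprobe -v -r calls print the rmmod lines seen in the log), strip only the SPDK-tagged firewall rules, and discard the namespace. As a plain script, with the namespace removal shown as the effect of the harness helper _remove_spdk_ns rather than its literal body:

    kill 588675 && wait 588675                    # nvmfpid recorded at startup
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk               # what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1

run_test closes the suite with the timing triple (real 0m13.449s here) and immediately opens nvmf_ns_hotplug_stress, which begins by repeating the same common.sh preamble that opened nvmf_abort.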
00:30:21.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:21.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.094 --rc genhtml_branch_coverage=1 00:30:21.094 --rc genhtml_function_coverage=1 00:30:21.094 --rc genhtml_legend=1 00:30:21.094 --rc geninfo_all_blocks=1 00:30:21.094 --rc geninfo_unexecuted_blocks=1 00:30:21.094 00:30:21.094 ' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:21.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.094 --rc genhtml_branch_coverage=1 00:30:21.094 --rc genhtml_function_coverage=1 00:30:21.094 --rc genhtml_legend=1 00:30:21.094 --rc geninfo_all_blocks=1 00:30:21.094 --rc geninfo_unexecuted_blocks=1 00:30:21.094 00:30:21.094 ' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:21.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.094 --rc genhtml_branch_coverage=1 00:30:21.094 --rc genhtml_function_coverage=1 00:30:21.094 --rc genhtml_legend=1 00:30:21.094 --rc geninfo_all_blocks=1 00:30:21.094 --rc geninfo_unexecuted_blocks=1 00:30:21.094 00:30:21.094 ' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:21.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.094 --rc genhtml_branch_coverage=1 00:30:21.094 --rc genhtml_function_coverage=1 
00:30:21.094 --rc genhtml_legend=1 00:30:21.094 --rc geninfo_all_blocks=1 00:30:21.094 --rc geninfo_unexecuted_blocks=1 00:30:21.094 00:30:21.094 ' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
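The version gate traced above is a plain field-wise compare: split each version string on '.', '-', and ':' (the IFS=.-: reads), pad the shorter one with zeros, and compare numerically field by field, so lt 1.15 2 is true and the pre-2.0 --rc lcov_* option spelling is chosen for this lcov. A minimal standalone sketch of that logic, assuming purely numeric fields (ver_lt is an illustrative name, not the helper scripts/common.sh actually defines):

  # Sketch of the field-wise version compare; numeric fields only.
  ver_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
      for (( i = 0; i < n; i++ )); do
          x=${a[i]:-0} y=${b[i]:-0}          # pad missing fields with 0
          (( x < y )) && return 0            # first differing field decides
          (( x > y )) && return 1
      done
      return 1                               # equal is not less-than
  }

  ver_lt 1.15 2 && echo "pre-2.0 lcov: use --rc lcov_branch_coverage=1 style options"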
00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.094 11:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.233 11:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.233 11:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:29.233 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:29.233 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.233 
11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:29.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:29.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.233 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.234 11:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:30:29.234 00:30:29.234 --- 10.0.0.2 ping statistics --- 00:30:29.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.234 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:29.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:30:29.234 00:30:29.234 --- 10.0.0.1 ping statistics --- 00:30:29.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.234 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=593589 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 593589 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 593589 ']' 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
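Condensed, the interface bring-up traced above is: move the target-side port (cvl_0_0) into its own network namespace, address both ends of the back-to-back link, open TCP/4420 on the initiator side, and prove reachability in both directions. A standalone sketch using the names and addresses from the trace (run as root; the iptables comment tag mirrors the harness's ipts wrapper):

  # Namespace wiring for the NVMe/TCP phy test, as performed above.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                    # target port moves into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns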
00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:29.234 11:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.234 [2024-11-15 11:10:47.991093] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:29.234 [2024-11-15 11:10:47.992225] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:30:29.234 [2024-11-15 11:10:47.992273] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.234 [2024-11-15 11:10:48.091532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:29.234 [2024-11-15 11:10:48.142435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.234 [2024-11-15 11:10:48.142482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.234 [2024-11-15 11:10:48.142495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.234 [2024-11-15 11:10:48.142502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.234 [2024-11-15 11:10:48.142508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.234 [2024-11-15 11:10:48.144636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.234 [2024-11-15 11:10:48.144838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.234 [2024-11-15 11:10:48.144838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.234 [2024-11-15 11:10:48.223997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:29.234 [2024-11-15 11:10:48.225069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:29.234 [2024-11-15 11:10:48.225487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:29.234 [2024-11-15 11:10:48.225651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
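waitforlisten then blocks until PID 593589 (the nvmf_tgt started above with -m 0xE --interrupt-mode) exposes its RPC socket; the max_retries=100 and rpc_addr=/var/tmp/spdk.sock values come from that helper. One way to reproduce the gate, sketched with SPDK's generic rpc_get_methods call (the loop body is an assumption for illustration, not the helper's actual implementation):

  # Illustrative wait-for-RPC loop; rpc_get_methods is a standard SPDK RPC.
  pid=593589
  rpc_sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( i = 0; i < 100; i++ )); do                  # max_retries=100, as traced
      kill -0 "$pid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      "$rpc" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done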
00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:29.496 11:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.496 [2024-11-15 11:10:49.017759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.757 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:29.757 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.018 [2024-11-15 11:10:49.390498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.018 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:30.278 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:30.278 Malloc0 00:30:30.278 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:30.539 Delay0 00:30:30.539 11:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.799 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:31.060 NULL1 00:30:31.060 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
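Stripped of harness noise, the provisioning just traced is a short RPC sequence; every call and flag below is verbatim from the trace, and only the trailing comments are an added gloss:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # the ns that gets hotplugged
  $rpc bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # the ns that gets resized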
00:30:31.060 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=593981 00:30:31.060 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:31.060 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:31.060 11:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.445 Read completed with error (sct=0, sc=11) 00:30:32.445 11:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.707 11:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:32.707 11:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:32.707 true 00:30:32.707 11:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:32.707 11:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.649 11:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.649 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:33.649 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:33.910 true 00:30:33.910 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:33.910 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.170 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:30:34.430 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:34.430 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:34.430 true 00:30:34.430 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:34.430 11:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 11:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.819 11:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:35.819 11:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:36.078 true 00:30:36.079 11:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:36.079 11:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.021 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.021 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:37.021 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:37.281 true 00:30:37.281 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:37.281 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.281 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.541 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:37.541 11:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:37.801 true 00:30:37.801 11:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:37.801 11:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.742 11:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.002 11:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:39.002 11:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:39.262 true 00:30:39.262 11:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:39.262 11:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.203 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.203 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:40.203 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:40.463 true 00:30:40.463 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:40.463 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.463 11:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.724 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:40.724 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:40.984 true 00:30:40.984 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:40.984 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.284 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.284 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:41.284 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:41.562 true 00:30:41.563 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:41.563 11:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.563 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.831 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:41.831 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:42.109 true 00:30:42.109 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:42.110 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
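Each numbered pass above repeats one pattern: yank namespace 1 while spdk_nvme_perf (PID 593981, started with -t 30 -q 128 -w randread -o 512 -Q 1000) is reading from it, re-add it, grow NULL1 by one unit, and check kill -0 on the perf PID before the next pass. A condensed sketch of that loop (the while-alive framing is illustrative; the RPC calls are verbatim from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf_pid=593981
  size=1000                                          # matches null_size=1000 above
  while kill -0 "$perf_pid" 2>/dev/null; do          # loop while perf (-t 30) is still running
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-unplug under I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-replug
      size=$(( size + 1 ))
      $rpc bdev_null_resize NULL1 "$size"                          # online resize of the other ns
  done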
00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.385 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:42.385 11:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:42.660 true 00:30:42.660 11:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:42.660 11:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.601 11:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.601 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:43.601 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:43.862 true 00:30:43.862 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:43.862 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.122 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.122 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:44.122 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:44.382 true 00:30:44.382 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:44.382 11:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 11:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.767 11:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:45.767 11:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:45.767 true 00:30:45.767 11:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:45.767 11:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.709 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.970 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:46.970 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:46.970 true 00:30:46.970 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:46.970 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.231 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.492 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:47.492 11:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:47.492 true 00:30:47.492 11:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:47.492 11:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.875 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:30:48.875 11:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:48.875 11:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:48.875 11:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:49.136 true 00:30:49.136 11:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:49.136 11:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.078 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.078 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:50.078 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:50.339 true 00:30:50.339 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:50.339 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.599 11:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.599 11:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:50.599 11:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:50.860 true 00:30:50.860 11:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:50.860 11:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:51.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 11:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.065 11:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:52.065 11:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:52.328 true 00:30:52.328 11:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:52.328 11:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.270 11:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:53.270 11:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:53.270 11:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:53.531 true 00:30:53.531 11:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:53.531 11:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.791 11:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.791 11:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:53.791 11:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:54.052 true 00:30:54.052 11:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:54.052 11:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 11:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.436 11:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:55.436 11:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:55.436 true 00:30:55.436 11:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:55.436 11:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:56.377 11:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:56.638 11:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:56.638 11:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:56.638 true 00:30:56.638 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:56.638 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.899 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.159 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:57.159 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:57.159 true 00:30:57.160 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:57.160 11:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 11:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.545 11:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:58.545 11:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:58.806 true 00:30:58.806 11:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:30:58.806 11:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.748 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.748 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:59.748 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:00.007 true 00:31:00.007 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981 00:31:00.007 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.266 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.266 11:11:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:31:00.266 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:31:00.528 true
00:31:00.528 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981
00:31:00.528 11:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:01.910 Initializing NVMe Controllers
00:31:01.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:01.910 Controller IO queue size 128, less than required.
00:31:01.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:01.910 Controller IO queue size 128, less than required.
00:31:01.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:01.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:01.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:01.910 Initialization complete. Launching workers.
00:31:01.910 ========================================================
00:31:01.910                                                                                Latency(us)
00:31:01.910 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:31:01.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2443.12       1.19   35740.65    1658.92 1022629.71
00:31:01.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19826.06       9.68    6462.75    1122.24  400425.84
00:31:01.910 ========================================================
00:31:01.910 Total                                                                  :   22269.19      10.87    9674.79    1122.24 1022629.71
00:31:01.910
00:31:01.910 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:01.910 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:31:01.910 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:31:01.910 true
00:31:01.910 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593981
00:31:01.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (593981) - No such process
00:31:02.170 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 593981
00:31:02.170 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:02.170 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
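The sh@44-50 entries above all come from one loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O generator (PID 593981 in this run) is still alive, the namespace is removed and re-added on top of the Delay0 bdev and the NULL1 bdev is grown by one unit per pass; the loop ends when kill -0 fails and the process is reaped. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the script (the rpc_py variable and the exact loop shape are assumptions; the rpc.py path, NQN, bdev names, PID, and sizes are taken from the log):

    # Sketch of the sh@44-50 hotplug loop seen above; a reconstruction,
    # not the script's verbatim source.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=593981   # I/O generator whose liveness gates the loop (from the log)
    null_size=1014    # first increment below yields 1015, where this excerpt picks up

    while kill -0 "$perf_pid" 2> /dev/null; do                            # sh@44
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46
        null_size=$((null_size + 1))                                      # sh@49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # sh@50
    done
    wait "$perf_pid"                                                      # sh@53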
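A side note on the latency summary above: the Total row is the IOPS-weighted combination of the two per-namespace rows, which is why the combined average (9674.79 us) sits far closer to the untouched NSID 2 path than to the hot-plugged NSID 1 path. A quick check with the numbers copied from the table, which matches the Total row up to the rounding already present in the inputs:

    awk 'BEGIN {
        iops1 = 2443.12;  avg1 = 35740.65   # NSID 1 row: IOPS, average latency (us)
        iops2 = 19826.06; avg2 = 6462.75    # NSID 2 row
        total = iops1 + iops2               # 22269.18, vs. 22269.19 in the log
        printf "Total %.2f IOPS, weighted avg %.2f us\n", total, (iops1 * avg1 + iops2 * avg2) / total
    }'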
00:31:02.431 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:02.431 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:02.431 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:02.431 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:02.431 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:02.692 null0
00:31:02.692 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:02.692 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:02.692 11:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:02.692 null1
00:31:02.692 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:02.692 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:02.692 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:02.953 null2
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:31:02.953 null3
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:02.953 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:31:03.214 null4
00:31:03.214 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:03.214 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:03.214 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:31:03.474 null5
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:31:03.474 null6
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:03.474 11:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:31:03.735 null7
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
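From here on the sh@62-64 launcher entries and the sh@14-18 worker entries interleave, because the eight add_remove workers run in the background and share one console. Each worker is a small helper taking a namespace ID and a backing bdev and looping ten times, per the (( i < 10 )) checks. A sketch consistent with the trace (a reconstruction, not the script's verbatim source; rpc_py as in the earlier sketch):

    # add_remove <nsid> <bdev>: hammer one namespace ID with add/remove cycles.
    # Reconstructed from the sh@14-18 trace above.
    add_remove() {
        local nsid=$1 bdev=$2                                            # sh@14
        for ((i = 0; i < 10; i++)); do                                   # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }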
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
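The launcher itself is what the sh@58-64 entries trace: create eight null bdevs, start one add_remove worker per bdev in the background, and collect each worker's PID so they can all be joined; the wait 600400 600401 600404 600405 600407 600409 600411 600413 entry further down lists exactly those PIDs. Sketched under the same assumptions as above (the script traces creation, sh@59-60, and launching, sh@62-64, as two separate loops; they are folded into one here for brevity):

    nthreads=8                                         # sh@58
    pids=()                                            # sh@58
    for ((i = 0; i < nthreads; i++)); do               # sh@59/@62
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: 100 MB bdev, 4096-byte blocks
        add_remove "$((i + 1))" "null$i" &             # sh@63: nsid i+1 backed by null<i>
        pids+=($!)                                     # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                                  # join all eight workers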
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
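With eight workers multiplexed onto one console, the remaining sh@16-18 churn is hard to follow by eye. When picking apart a saved copy of a run like this, filtering on the script-line markers is one workable approach (build.log is a placeholder name for a local copy of this console output, not a file produced by the test):

    # Count add vs. remove RPCs issued during the worker phase.
    grep -c 'ns_hotplug_stress.sh@17' build.log   # nvmf_subsystem_add_ns calls
    grep -c 'ns_hotplug_stress.sh@18' build.log   # nvmf_subsystem_remove_ns calls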
00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 600400 600401 600404 600405 600407 600409 600411 600413 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.736 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.997 11:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.997 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.258 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.258 
11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.519 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.519 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.520 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.780 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.780 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.042 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.042 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.304 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.304 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.565 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.566 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.566 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.566 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.566 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.566 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.566 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.827 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.828 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.828 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.828 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.828 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.089 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.351 11:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.351 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.352 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:06.613 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:06.614 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.614 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:06.614 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:06.614 11:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.614 11:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.614 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.875 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:07.136 11:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:07.136 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.396 11:11:26 
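The add/remove churn above is the hot-plug stress loop itself. A minimal sketch of what target/ns_hotplug_stress.sh lines 16-18 appear to be doing, reconstructed from the trace: each namespace gets a worker that attaches a null bdev and detaches it again for ten passes. The worker-per-namespace structure and the helper/variable names are inferences from the interleaved counters, not the script verbatim.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # as invoked above
    subsys=nqn.2016-06.io.spdk:cnode1

    stress_ns() {  # hypothetical helper; one instance per namespace, run in the background
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                    # sh@16: (( ++i )) / (( i < 10 ))
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"          # sh@18
        done
    }
    for n in {1..8}; do stress_ns "$n" "null$((n - 1))" & done  # nsid n is backed by null(n-1)
    wait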
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.396 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.397 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.397 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.397 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.397 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.657 11:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.657 rmmod nvme_tcp 00:31:07.657 rmmod nvme_fabrics 00:31:07.657 rmmod nvme_keyring 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 
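The rmmod lines above come from nvmfcleanup tearing down the kernel initiator stack. A hedged reading of the nvmf/common.sh@121-128 trace, with the retry/back-off details assumed rather than known:

    nvmfcleanup() {
        sync                                       # common.sh@121
        if [ "$TEST_TRANSPORT" == tcp ]; then      # common.sh@123; guard variable name assumed
            set +e                                 # modules can be briefly busy: tolerate failures
            for i in {1..20}; do                   # common.sh@125
                modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
                sleep 1                            # back-off between attempts (assumption)
            done
            set -e
        fi
    }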
-- # set -e 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 593589 ']' 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 593589 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 593589 ']' 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 593589 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 593589 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 593589' 00:31:07.657 killing process with pid 593589 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 593589 00:31:07.657 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 593589 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.917 11:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.827 11:11:29 
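Two helpers do the rest of the teardown traced above: killprocess stops the interrupt-mode target (pid 593589, running as reactor_1), and iptr restores the firewall minus the test's own rules. Sketches following the autotest_common.sh@952-976 and nvmf/common.sh@791 traces, with the guard conditions simplified:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                            # is it still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_1
        fi
        if [ "$process_name" = sudo ]; then
            return 1  # placeholder: the real helper handles sudo-wrapped processes differently
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    iptr() {
        # drop only the SPDK_NVMF rules added during the test, keep everything else
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }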
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.827 00:31:09.827 real 0m49.099s 00:31:09.827 user 2m54.969s 00:31:09.827 sys 0m20.273s 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:09.827 ************************************ 00:31:09.827 END TEST nvmf_ns_hotplug_stress 00:31:09.827 ************************************ 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:09.827 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.088 ************************************ 00:31:10.088 START TEST nvmf_delete_subsystem 00:31:10.088 ************************************ 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:10.088 * Looking for test storage... 00:31:10.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- scripts/common.sh@341 -- # ver2_l=1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.088 --rc genhtml_branch_coverage=1 00:31:10.088 --rc genhtml_function_coverage=1 00:31:10.088 --rc genhtml_legend=1 00:31:10.088 --rc geninfo_all_blocks=1 00:31:10.088 --rc geninfo_unexecuted_blocks=1 00:31:10.088 00:31:10.088 ' 00:31:10.088 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.088 --rc genhtml_branch_coverage=1 00:31:10.088 --rc genhtml_function_coverage=1 00:31:10.088 --rc genhtml_legend=1 00:31:10.088 --rc geninfo_all_blocks=1 00:31:10.089 --rc geninfo_unexecuted_blocks=1 00:31:10.089 00:31:10.089 ' 00:31:10.089 11:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.089 --rc genhtml_branch_coverage=1 00:31:10.089 --rc genhtml_function_coverage=1 00:31:10.089 --rc genhtml_legend=1 00:31:10.089 --rc geninfo_all_blocks=1 00:31:10.089 --rc geninfo_unexecuted_blocks=1 00:31:10.089 00:31:10.089 ' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.089 --rc genhtml_branch_coverage=1 00:31:10.089 --rc genhtml_function_coverage=1 00:31:10.089 --rc genhtml_legend=1 00:31:10.089 --rc geninfo_all_blocks=1 00:31:10.089 --rc geninfo_unexecuted_blocks=1 00:31:10.089 00:31:10.089 ' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
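Before the new test body runs, autotest_common.sh probes the installed lcov and compares its version against 1.15 through lt()/cmp_versions(), which is what the scripts/common.sh@333-368 trace above steps through: split both versions on '.' and '-', then compare field by field. A compact restatement, hedged (the decimal() digit validation is elided):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.- read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$3"    # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == '>' ]] && return 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == '<' ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
        done
        [[ $op == *=* ]]   # all fields equal: true only for <=, >=, ==
    }

For the run above, lt 1.15 2 returns 0 on the first field (1 < 2), so the branch-coverage LCOV options get exported.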
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # 
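The common.sh@25-34 trace above is build_nvmf_app_args assembling the target's command line; the '[' 1 -eq 1 ']' branch is what injects --interrupt-mode for this run. A sketch with the guard variable names assumed, not confirmed by the log:

    build_nvmf_app_args() {
        if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then   # '[' 0 -eq 1 ']' above: not taken
            NVMF_APP=(sudo -E -u "$(logname)" "${NVMF_APP[@]}")  # assumed non-root wrapper
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id (0 here) and full trace mask
        NVMF_APP+=("${NO_HUGE[@]}")                    # empty unless hugepages are disabled
        if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then # '[' 1 -eq 1 ']' above: taken
            NVMF_APP+=(--interrupt-mode)
        fi
    }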
gather_supported_nvmf_pci_devs 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.089 11:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.236 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:18.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:18.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:18.237 11:11:36 
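gather_supported_nvmf_pci_devs, traced from nvmf/common.sh@313 onward, whitelists NIC device IDs per vendor and then walks the matches; both 0x159b functions found here are E810 ports bound to the ice driver. Roughly, per the trace (pci_bus_cache population not shown):

    intel=0x8086 mellanox=0x15b3                 # vendor IDs, common.sh@313
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 IDs being whitelisted
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # <- both 0000:4b:00.x ports match this one
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # one of several ConnectX IDs in the trace
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810: keep only E810 matches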
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:18.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:18.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 
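The trace above is gather_supported_nvmf_pci_devs at work: it classifies NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, the Mellanox mlx5 family), keeps only the class selected by SPDK_TEST_NVMF_NICS=e810, and resolves each surviving PCI address to its kernel net device through sysfs. A minimal standalone sketch of that logic follows; note the harness consults a prebuilt pci_bus_cache, which this sketch replaces with a plain lspci scan, and the variable names here are illustrative, not the script's:

    #!/usr/bin/env bash
    # Sketch: locate Intel E810 NICs and the net devices behind them,
    # mirroring the discovery the trace above performs via pci_bus_cache.
    intel=0x8086
    e810_ids=(0x1592 0x159b)
    pci_devs=()
    for id in "${e810_ids[@]}"; do
        # lspci -d vendor:device lists matching functions; -D keeps the
        # domain so addresses look like 0000:4b:00.0, as in the log.
        while read -r addr _; do
            pci_devs+=("$addr")
        done < <(lspci -D -d "${intel#0x}:${id#0x}")
    done
    for pci in "${pci_devs[@]}"; do
        # Each bound NIC exposes its net device name under sysfs.
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] && echo "Found net devices under $pci: ${netdir##*/}"
        done
    done

With both E810 ports bound to ice and their cvl_0_0/cvl_0_1 interfaces up, the script settles on is_hw=yes and proceeds to nvmf_tcp_init.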
-- # nvmf_tcp_init 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:18.237 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:18.237 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:31:18.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:31:18.237 00:31:18.237 --- 10.0.0.2 ping statistics --- 00:31:18.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.237 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:18.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:18.237 00:31:18.237 --- 10.0.0.1 ping statistics --- 00:31:18.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.237 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.237 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=605559 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 605559 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 605559 ']' 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:18.238 11:11:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:18.238 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.238 [2024-11-15 11:11:37.205220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:18.238 [2024-11-15 11:11:37.206357] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:31:18.238 [2024-11-15 11:11:37.206405] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.238 [2024-11-15 11:11:37.306217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.238 [2024-11-15 11:11:37.357145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.238 [2024-11-15 11:11:37.357193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.238 [2024-11-15 11:11:37.357202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.238 [2024-11-15 11:11:37.357215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.238 [2024-11-15 11:11:37.357222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.238 [2024-11-15 11:11:37.358928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.238 [2024-11-15 11:11:37.358931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.238 [2024-11-15 11:11:37.436633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:18.238 [2024-11-15 11:11:37.437228] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:18.238 [2024-11-15 11:11:37.437527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
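What the trace just walked through is nvmf_tcp_init plus the target launch: one E810 port (cvl_0_0) is moved into a private network namespace as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, a tagged iptables rule opens TCP/4420, both directions are ping-verified, and nvmf_tgt is started inside the namespace with --interrupt-mode on cores 0-1. Condensed into a plain script (interface and namespace names taken from the trace; the binary path is shortened to be relative to an SPDK build tree):

    # Namespace-based point-to-point test bed, as set up by nvmf_tcp_init above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open TCP/4420 and tag the rule so teardown can strip it by comment.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root ns -> target port
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator port
    # Launch the target inside the namespace, interrupt mode, cores 0-1 (0x3).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

The interrupt-mode notices that close the startup ("Set spdk_thread ... to intr mode from intr mode") confirm the reactors and app thread are event-driven rather than busy-polling, which is the point of this *_interrupt_mode test group.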
00:31:18.498 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:18.498 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:18.498 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:18.498 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.498 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 [2024-11-15 11:11:38.071983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 [2024-11-15 11:11:38.104435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 NULL1 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 Delay0 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=605587 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:18.760 11:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:18.760 [2024-11-15 11:11:38.228027] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
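From here the target is provisioned entirely over JSON-RPC: a TCP transport configured with the flags shown (-o -u 8192), a subsystem allowing up to 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev wrapped in a delay bdev (1,000,000 us on every I/O class) attached as a namespace, and finally a 5-second spdk_nvme_perf run. The same sequence issued by hand with scripts/rpc.py instead of the harness's rpc_cmd wrapper, as a sketch with every flag copied from the trace:

    # Provisioning sequence, replayed with rpc.py against the target's RPC socket.
    RPC="./scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512       # 1000 MiB bdev, 512-byte blocks
    # Delay bdev: 1,000,000 us configured for each read/write latency knob.
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # 5 s of 70/30 random read/write, qd 128, 512-byte I/O, from cores 2-3 (0xC).
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

Because every I/O now takes on the order of a second, the queue stays saturated, which is exactly the window the next step needs to delete the subsystem mid-flight.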
00:31:20.719 11:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.719 11:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.719 11:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:20.980 [repeated identical 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, interleaved with periodic 'starting I/O failed: -6' markers, condensed around each of the qpair errors below] 
00:31:20.981 [2024-11-15 11:11:40.367810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d860 is same with the state(6) to be set 
00:31:20.981 [2024-11-15 11:11:40.368667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d4a0 is same with the state(6) to be set 
00:31:21.922 [2024-11-15 11:11:41.327061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114e9a0 is same with the state(6) to be set 
00:31:21.923 [2024-11-15 11:11:41.371816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d680 is same with the state(6) to be set 
00:31:21.923 [2024-11-15 11:11:41.373518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe37000d020 is same with the state(6) to be set 
00:31:21.923 [2024-11-15 11:11:41.373842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe37000d7c0 is same with the state(6) to be set 
00:31:21.923 [2024-11-15 11:11:41.373952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe370000c40 is same with the state(6) to be set 
00:31:21.923 Initializing NVMe Controllers 
00:31:21.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:31:21.923 Controller IO queue size 128, less than required. 
00:31:21.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:21.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 
00:31:21.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 
00:31:21.923 Initialization complete. Launching workers. 
00:31:21.923 ======================================================== 00:31:21.923 Latency(us) 00:31:21.923 Device Information : IOPS MiB/s Average min max 00:31:21.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.92 0.08 867217.38 358.22 1011698.70 00:31:21.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.00 0.07 1079603.07 387.82 2003078.63 00:31:21.923 ======================================================== 00:31:21.923 Total : 303.93 0.15 969245.80 358.22 2003078.63 00:31:21.923 00:31:21.923 [2024-11-15 11:11:41.374533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114e9a0 (9): Bad file descriptor 00:31:21.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:21.923 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.923 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:21.923 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 605587 00:31:21.923 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 605587 00:31:22.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (605587) - No such process 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 605587 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 605587 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 605587 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 [2024-11-15 11:11:41.908500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=606304 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:22.496 11:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.496 [2024-11-15 11:11:42.007271] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
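Both phases of this test end the same way: after the subsystem disappears out from under the perf job (the sc=8 aborts and -6 submission failures condensed earlier, and the "errors occurred" exit), the harness polls kill -0 on the perf PID every half second with an upper bound, and only proceeds once the process is really gone. The same idiom as delete_subsystem.sh's trace shows it, with the loop bound and sleep interval taken from the trace and the failure branch paraphrased:

    # Wait for the perf job to die after nvmf_delete_subsystem, per the trace.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # Give up after ~10 s of half-second polls, as the script's bound implies.
        (( delay++ > 20 )) && { echo "perf survived subsystem deletion" >&2; exit 1; }
        sleep 0.5
    done
    # kill -0 now reports "No such process"; `wait $perf_pid` is likewise
    # expected to fail, which the NOT wrapper in the trace asserts (es=1).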
00:31:23.066 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.066 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:23.066 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.636 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.636 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:23.636 11:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.206 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.206 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:24.206 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.466 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.466 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:24.466 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.036 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:25.036 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:25.036 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.607 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:25.607 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:25.607 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.867 Initializing NVMe Controllers 00:31:25.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.867 Controller IO queue size 128, less than required. 00:31:25.867 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:25.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:25.867 Initialization complete. Launching workers. 
00:31:25.867 ======================================================== 00:31:25.867 Latency(us) 00:31:25.867 Device Information : IOPS MiB/s Average min max 00:31:25.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002585.39 1000277.25 1042378.74 00:31:25.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003810.03 1000449.91 1041764.25 00:31:25.867 ======================================================== 00:31:25.867 Total : 256.00 0.12 1003197.71 1000277.25 1042378.74 00:31:25.867 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 606304 00:31:26.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (606304) - No such process 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 606304 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.128 rmmod nvme_tcp 00:31:26.128 rmmod nvme_fabrics 00:31:26.128 rmmod nvme_keyring 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 605559 ']' 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 605559 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 605559 ']' 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 605559 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 605559 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 605559' 00:31:26.128 killing process with pid 605559 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 605559 00:31:26.128 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 605559 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.389 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:28.302 00:31:28.302 real 0m18.418s 00:31:28.302 user 0m26.606s 00:31:28.302 sys 0m7.624s 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.302 ************************************ 00:31:28.302 END TEST nvmf_delete_subsystem 00:31:28.302 ************************************ 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
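nvmftestfini then unwinds the whole bed: the kernel NVMe modules are unloaded (the rmmod lines above), target process 605559 is killed and reaped, the SPDK-tagged iptables rules are stripped by round-tripping iptables-save through grep -v SPDK_NVMF, the namespace is removed via remove_spdk_ns (whose body is elided from the trace), and the initiator address is flushed. As one plain script, sketched from the traced commands; the ip netns delete line is our assumption about what remove_spdk_ns does:

    # Teardown, mirroring nvmftestfini / nvmf_tcp_fini in the trace above.
    modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop and reap the target
    # Strip only the rules this test added; they all carry the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk      # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1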
common/autotest_common.sh@1109 -- # xtrace_disable 00:31:28.302 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:28.563 ************************************ 00:31:28.563 START TEST nvmf_host_management 00:31:28.563 ************************************ 00:31:28.563 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:28.563 * Looking for test storage... 00:31:28.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.563 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:28.563 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:28.563 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.563 --rc genhtml_branch_coverage=1 00:31:28.563 --rc genhtml_function_coverage=1 00:31:28.563 --rc genhtml_legend=1 00:31:28.563 --rc geninfo_all_blocks=1 00:31:28.563 --rc geninfo_unexecuted_blocks=1 00:31:28.563 00:31:28.563 ' 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.563 --rc genhtml_branch_coverage=1 00:31:28.563 --rc genhtml_function_coverage=1 00:31:28.563 --rc genhtml_legend=1 00:31:28.563 --rc geninfo_all_blocks=1 00:31:28.563 --rc geninfo_unexecuted_blocks=1 00:31:28.563 00:31:28.563 ' 00:31:28.563 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.563 --rc genhtml_branch_coverage=1 00:31:28.564 --rc genhtml_function_coverage=1 00:31:28.564 --rc genhtml_legend=1 00:31:28.564 --rc geninfo_all_blocks=1 00:31:28.564 --rc geninfo_unexecuted_blocks=1 00:31:28.564 00:31:28.564 ' 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.564 --rc genhtml_branch_coverage=1 00:31:28.564 --rc genhtml_function_coverage=1 00:31:28.564 --rc genhtml_legend=1 
00:31:28.564 --rc geninfo_all_blocks=1 00:31:28.564 --rc geninfo_unexecuted_blocks=1 00:31:28.564 00:31:28.564 ' 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.564 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go trio repeated five more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=[as above, with /opt/go/1.21.1/bin rotated to the front]
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=[as above, with /opt/protoc/21.7/bin rotated to the front]
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH, identical to the @4 value]
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:28.825 11:11:48
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.825 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.826 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.973 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.973 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.973 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.973 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.973 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.974 11:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:36.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:36.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
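The trace above is nvmf/common.sh discovering supported NICs: it seeds PCI ID tables (Intel e810 0x1592/0x159b, x722 0x37d2, and a list of Mellanox IDs), keeps the e810 matches for this NET_TYPE=phy run, and resolves each matching PCI function to its kernel net device through sysfs. A minimal sketch of that lookup in the same shell idiom, using the address and IDs reported in this run; the harness additionally checks that the device is up before accepting it:

    # map one PCI function to its net interface, as the trace does above
    pci=0000:4b:00.0                                  # first e810 port in this log
    vendor=$(< "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 (Intel)
    device=$(< "/sys/bus/pci/devices/$pci/device")    # 0x159b (e810)
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one sysfs dir per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")           # basenames -> e.g. cvl_0_0
    echo "Found net devices under $pci ($vendor - $device): ${pci_net_devs[*]}"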
00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:36.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:36.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.974 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:31:36.975 00:31:36.975 --- 10.0.0.2 ping statistics --- 00:31:36.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.975 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:31:36.975 00:31:36.975 --- 10.0.0.1 ping statistics --- 00:31:36.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.975 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=611250 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 611250 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 611250 ']' 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:36.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:36.975 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.975 [2024-11-15 11:11:55.662280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.975 [2024-11-15 11:11:55.663396] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:31:36.975 [2024-11-15 11:11:55.663443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.975 [2024-11-15 11:11:55.761647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.975 [2024-11-15 11:11:55.814798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.975 [2024-11-15 11:11:55.814847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.975 [2024-11-15 11:11:55.814856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.975 [2024-11-15 11:11:55.814864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.975 [2024-11-15 11:11:55.814870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.975 [2024-11-15 11:11:55.816930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.975 [2024-11-15 11:11:55.817092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.975 [2024-11-15 11:11:55.817229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.975 [2024-11-15 11:11:55.817230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:36.975 [2024-11-15 11:11:55.895889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.975 [2024-11-15 11:11:55.896962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.975 [2024-11-15 11:11:55.897253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.975 [2024-11-15 11:11:55.897642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.975 [2024-11-15 11:11:55.897697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
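The target is now up: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1 through 4, matching the four reactor notices above), recorded nvmfpid=611250, and waitforlisten blocked until the RPC socket answered. A condensed sketch of that sequence; the polling loop is a simplified stand-in for waitforlisten, not its actual implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start the target in the test namespace: interrupt mode, cores 1-4
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!   # 611250 in this run
    # block until the app finishes init and serves RPCs on /var/tmp/spdk.sock
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init \
          >/dev/null 2>&1; do
        sleep 0.1
    done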
00:31:36.975 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:36.975 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:36.975 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.975 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:36.975 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 [2024-11-15 11:11:56.518108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 Malloc0 00:31:37.239 [2024-11-15 11:11:56.622385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=611474 00:31:37.239 11:11:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 611474 /var/tmp/bdevperf.sock 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 611474 ']' 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:37.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.239 { 00:31:37.239 "params": { 00:31:37.239 "name": "Nvme$subsystem", 00:31:37.239 "trtype": "$TEST_TRANSPORT", 00:31:37.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.239 "adrfam": "ipv4", 00:31:37.239 "trsvcid": "$NVMF_PORT", 00:31:37.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.239 "hdgst": ${hdgst:-false}, 00:31:37.239 "ddgst": ${ddgst:-false} 00:31:37.239 }, 00:31:37.239 "method": "bdev_nvme_attach_controller" 00:31:37.239 } 00:31:37.239 EOF 00:31:37.239 )") 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
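The --json /dev/fd/63 argument in the bdevperf command line above is bash process substitution: gen_nvmf_target_json expands the heredoc just traced into a complete SPDK JSON config, and bdevperf reads it from the anonymous file descriptor. A hedged reconstruction of that wiring; the params block matches what jq prints in the next trace entry, while the outer subsystems/bdev wrapper is SPDK's usual JSON-config shape rather than a verbatim quote from this log:

    # stand-in for "gen_nvmf_target_json 0": emit one attach-controller entry
    gen_config() {
        printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } } ] } ] }'
    }
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_config) -q 64 -o 65536 -w verify -t 10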
00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:37.239 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.239 "params": { 00:31:37.239 "name": "Nvme0", 00:31:37.239 "trtype": "tcp", 00:31:37.239 "traddr": "10.0.0.2", 00:31:37.239 "adrfam": "ipv4", 00:31:37.239 "trsvcid": "4420", 00:31:37.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.239 "hdgst": false, 00:31:37.239 "ddgst": false 00:31:37.239 }, 00:31:37.239 "method": "bdev_nvme_attach_controller" 00:31:37.239 }' 00:31:37.239 [2024-11-15 11:11:56.732480] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:31:37.239 [2024-11-15 11:11:56.732553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611474 ] 00:31:37.501 [2024-11-15 11:11:56.828347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.501 [2024-11-15 11:11:56.882089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.762 Running I/O for 10 seconds... 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.337 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:38.337 [2024-11-15 11:11:57.641766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0af20 is same with the state(6) to be set
00:31:38.337 [2024-11-15 11:11:57.642117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:38.337 [2024-11-15 11:11:57.642171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:38.337 [... 63 further command/completion pairs elided: WRITE sqid:1 cid:57-63 (lba 105600-106368) and READ sqid:1 cid:0-55 (lba 98304-105344), each len:128, all completed ABORTED - SQ DELETION (00/08) qid:1 between 11:11:57.642192 and 11:11:57.643305 ...]
00:31:38.338 [2024-11-15 11:11:57.644645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:38.338 task offset: 105472 on job bdev=Nvme0n1 fails
00:31:38.338 
00:31:38.338 Latency(us)
00:31:38.338 [2024-11-15T10:11:57.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.339 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.339 Job: Nvme0n1 ended in about 0.59 seconds with error 00:31:38.339 Verification LBA range: start 0x0 length 0x400 00:31:38.339 Nvme0n1 : 0.59 1291.95 80.75 107.66 0.00 44690.03 1658.88 39758.51 00:31:38.339 [2024-11-15T10:11:57.866Z] =================================================================================================================== 00:31:38.339 [2024-11-15T10:11:57.866Z] Total : 1291.95 80.75 107.66 0.00 44690.03 1658.88 39758.51 00:31:38.339 [2024-11-15 11:11:57.646901] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:38.339 [2024-11-15 11:11:57.646941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa05000 (9): Bad file descriptor 00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 [2024-11-15 11:11:57.648612] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:38.339 [2024-11-15 11:11:57.648743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:38.339 [2024-11-15 11:11:57.648787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.339 [2024-11-15 11:11:57.648807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:38.339 [2024-11-15 11:11:57.648816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:38.339 [2024-11-15 11:11:57.648837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.339 [2024-11-15 11:11:57.648844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa05000 00:31:38.339 [2024-11-15 11:11:57.648871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa05000 (9): Bad file descriptor 00:31:38.339 [2024-11-15 11:11:57.648886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:38.339 [2024-11-15 11:11:57.648894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:38.339 [2024-11-15 11:11:57.648904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:38.339 [2024-11-15 11:11:57.648914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
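The burst of aborts and the failed reset above are the intended outcome of this host_management step: the fabric CONNECT is rejected with "does not allow host" (the log's "sct 1, sc 132") because subsystem access control blocks any host NQN that has not been added to the subsystem, and the reconnect already in flight fails before the whitelist RPC takes effect. A minimal sketch of the whitelist step itself, reusing the rpc.py path and NQNs that appear in this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # allow host0 to connect to cnode0; subsequent CONNECT attempts succeed
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # optional sanity check: dump the subsystems and confirm the host entry
  $rpc nvmf_get_subsystems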
00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.339 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 611474 00:31:39.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (611474) - No such process 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:39.284 { 00:31:39.284 "params": { 00:31:39.284 "name": "Nvme$subsystem", 00:31:39.284 "trtype": "$TEST_TRANSPORT", 00:31:39.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.284 "adrfam": "ipv4", 00:31:39.284 "trsvcid": "$NVMF_PORT", 00:31:39.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.284 "hdgst": ${hdgst:-false}, 00:31:39.284 "ddgst": ${ddgst:-false} 00:31:39.284 }, 00:31:39.284 "method": "bdev_nvme_attach_controller" 00:31:39.284 } 00:31:39.284 EOF 00:31:39.284 )") 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:39.284 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:39.284 "params": { 00:31:39.284 "name": "Nvme0", 00:31:39.284 "trtype": "tcp", 00:31:39.284 "traddr": "10.0.0.2", 00:31:39.284 "adrfam": "ipv4", 00:31:39.284 "trsvcid": "4420", 00:31:39.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:39.284 "hdgst": false, 00:31:39.284 "ddgst": false 00:31:39.284 }, 00:31:39.284 "method": "bdev_nvme_attach_controller" 00:31:39.284 }' 00:31:39.284 [2024-11-15 11:11:58.731578] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
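Everything the second bdevperf attempt needs comes from the JSON printed above: gen_nvmf_target_json (from test/nvmf/common.sh) emits a bdev_nvme_attach_controller entry with those parameters, and bdevperf consumes it through --json /dev/fd/62. A minimal sketch of an equivalent standalone invocation, assuming gen_nvmf_target_json wraps the printed method entry in a full SPDK JSON configuration (the wrapper itself is outside this excerpt) and that the environment variables it reads are already set:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source $spdk/test/nvmf/common.sh
  # process substitution plays the role of the /dev/fd/62 stream seen above
  $spdk/build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1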
00:31:39.284 [2024-11-15 11:11:58.731655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611899 ] 00:31:39.544 [2024-11-15 11:11:58.825704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.544 [2024-11-15 11:11:58.878292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.804 Running I/O for 1 seconds... 00:31:40.746 1669.00 IOPS, 104.31 MiB/s 00:31:40.746 Latency(us) 00:31:40.746 [2024-11-15T10:12:00.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:40.746 Verification LBA range: start 0x0 length 0x400 00:31:40.746 Nvme0n1 : 1.02 1705.51 106.59 0.00 0.00 36769.45 2484.91 36044.80 00:31:40.746 [2024-11-15T10:12:00.273Z] =================================================================================================================== 00:31:40.746 [2024-11-15T10:12:00.273Z] Total : 1705.51 106.59 0.00 0.00 36769.45 2484.91 36044.80 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.746 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.746 rmmod nvme_tcp 00:31:40.746 rmmod nvme_fabrics 00:31:41.007 rmmod nvme_keyring 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 611250 ']' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 611250 00:31:41.007 11:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 611250 ']' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 611250 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 611250 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 611250' 00:31:41.007 killing process with pid 611250 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 611250 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 611250 00:31:41.007 [2024-11-15 11:12:00.473884] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.007 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:43.551 00:31:43.551 real 0m14.715s 00:31:43.551 user 0m19.476s 
00:31:43.551 sys 0m7.551s 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.551 ************************************ 00:31:43.551 END TEST nvmf_host_management 00:31:43.551 ************************************ 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:43.551 ************************************ 00:31:43.551 START TEST nvmf_lvol 00:31:43.551 ************************************ 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.551 * Looking for test storage... 00:31:43.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:43.551 11:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:43.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.551 --rc genhtml_branch_coverage=1 00:31:43.551 --rc genhtml_function_coverage=1 00:31:43.551 --rc genhtml_legend=1 00:31:43.551 --rc geninfo_all_blocks=1 00:31:43.551 --rc geninfo_unexecuted_blocks=1 00:31:43.551 00:31:43.551 ' 00:31:43.551 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.552 --rc genhtml_branch_coverage=1 00:31:43.552 --rc genhtml_function_coverage=1 00:31:43.552 --rc genhtml_legend=1 00:31:43.552 --rc geninfo_all_blocks=1 00:31:43.552 --rc geninfo_unexecuted_blocks=1 00:31:43.552 00:31:43.552 ' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.552 --rc genhtml_branch_coverage=1 00:31:43.552 --rc genhtml_function_coverage=1 00:31:43.552 --rc genhtml_legend=1 00:31:43.552 --rc geninfo_all_blocks=1 00:31:43.552 --rc geninfo_unexecuted_blocks=1 00:31:43.552 00:31:43.552 ' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.552 --rc genhtml_branch_coverage=1 00:31:43.552 --rc genhtml_function_coverage=1 00:31:43.552 --rc 
genhtml_legend=1 00:31:43.552 --rc geninfo_all_blocks=1 00:31:43.552 --rc geninfo_unexecuted_blocks=1 00:31:43.552 00:31:43.552 ' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.552 11:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.552 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.692 11:12:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:51.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:51.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:51.692 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:51.692 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.692 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.692 
11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.692 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:31:51.693 00:31:51.693 --- 10.0.0.2 ping statistics --- 00:31:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.693 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:31:51.693 00:31:51.693 --- 10.0.0.1 ping statistics --- 00:31:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.693 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=616673 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 616673 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 616673 ']' 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:51.693 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.693 [2024-11-15 11:12:10.368681] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:51.693 [2024-11-15 11:12:10.369792] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:31:51.693 [2024-11-15 11:12:10.369841] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.693 [2024-11-15 11:12:10.471013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:51.693 [2024-11-15 11:12:10.523612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.693 [2024-11-15 11:12:10.523662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.693 [2024-11-15 11:12:10.523671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.693 [2024-11-15 11:12:10.523678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.693 [2024-11-15 11:12:10.523684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.693 [2024-11-15 11:12:10.525618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.693 [2024-11-15 11:12:10.525714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.693 [2024-11-15 11:12:10.525714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.693 [2024-11-15 11:12:10.604633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:51.693 [2024-11-15 11:12:10.605705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:51.693 [2024-11-15 11:12:10.605954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:51.693 [2024-11-15 11:12:10.606153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
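The lvol-test target above is launched inside the cvl_0_0_ns_spdk namespace in interrupt mode with core mask 0x7, which is why three reactors come up (cores 0-2) and every spdk_thread (app_thread plus one nvmf_tgt poll group per core) is switched to intr mode before the suite proceeds. A minimal sketch of the equivalent manual launch, reusing the binary path and namespace from this log; the readiness loop below is a hypothetical stand-in for the suite's waitforlisten helper:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  # poll the default RPC socket (a unix socket, reachable from the host) until the target responds
  until $spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done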
00:31:51.693 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:51.693 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:51.693 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:51.693 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:51.693 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.954 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.954 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:51.954 [2024-11-15 11:12:11.390809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.954 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:52.215 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:52.215 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:52.476 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:52.476 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:52.737 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:52.737 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=755a2c72-c861-4030-9239-ddf8f888f760 00:31:52.737 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 755a2c72-c861-4030-9239-ddf8f888f760 lvol 20 00:31:52.998 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=23829527-e43f-4e10-9cb2-5cdace4fd23c 00:31:52.998 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:53.259 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23829527-e43f-4e10-9cb2-5cdace4fd23c 00:31:53.520 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.520 [2024-11-15 11:12:12.978721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:53.520 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:53.781 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=617548 00:31:53.781 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:53.781 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:54.724 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 23829527-e43f-4e10-9cb2-5cdace4fd23c MY_SNAPSHOT 00:31:54.986 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=77b34af2-2d7f-4f8b-ba9f-68696446f0e7 00:31:54.986 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 23829527-e43f-4e10-9cb2-5cdace4fd23c 30 00:31:55.248 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 77b34af2-2d7f-4f8b-ba9f-68696446f0e7 MY_CLONE 00:31:55.509 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ded26f47-99b1-4eed-9a04-630cf4c4f5b2 00:31:55.509 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ded26f47-99b1-4eed-9a04-630cf4c4f5b2 00:31:56.081 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 617548 00:32:04.225 Initializing NVMe Controllers 00:32:04.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:04.225 Controller IO queue size 128, less than required. 00:32:04.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:04.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:04.226 Initialization complete. Launching workers. 
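While spdk_nvme_perf drives ten seconds of 4 KiB random writes (-q 128, randwrite) at the exported namespace, the script walks the live lvol through snapshot, resize, clone, and inflate. A condensed sketch of that RPC sequence, with the long Jenkins path to scripts/rpc.py abbreviated to rpc.py and the UUIDs reused from the trace above:

  # lvol created earlier on the raid0-backed lvstore "lvs"
  lvol=23829527-e43f-4e10-9cb2-5cdace4fd23c
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # snapshot is read-only; writes keep landing on the origin
  rpc.py bdev_lvol_resize "$lvol" 30                      # grow the origin from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
  rpc.py bdev_lvol_inflate "$clone"                       # allocate every cluster, detaching the clone from the snapshot

Doing this while the perf job is writing is the point of the test: each lvol operation has to coexist with I/O in flight against the same lvstore.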
00:32:04.226 ======================================================== 00:32:04.226 Latency(us) 00:32:04.226 Device Information : IOPS MiB/s Average min max 00:32:04.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15195.60 59.36 8424.32 797.65 54864.04 00:32:04.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14998.00 58.59 8534.81 2860.48 62244.89 00:32:04.226 ======================================================== 00:32:04.226 Total : 30193.60 117.94 8479.20 797.65 62244.89 00:32:04.226 00:32:04.226 11:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.226 11:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23829527-e43f-4e10-9cb2-5cdace4fd23c 00:32:04.486 11:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 755a2c72-c861-4030-9239-ddf8f888f760 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.748 rmmod nvme_tcp 00:32:04.748 rmmod nvme_fabrics 00:32:04.748 rmmod nvme_keyring 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 616673 ']' 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 616673 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 616673 ']' 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 616673 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 
-- # ps --no-headers -o comm= 616673 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 616673' 00:32:04.748 killing process with pid 616673 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 616673 00:32:04.748 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 616673 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.009 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.923 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.923 00:32:06.923 real 0m23.754s 00:32:06.923 user 0m55.854s 00:32:06.923 sys 0m10.634s 00:32:06.923 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:06.923 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:06.923 ************************************ 00:32:06.923 END TEST nvmf_lvol 00:32:06.923 ************************************ 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:07.185 ************************************ 00:32:07.185 START TEST nvmf_lvs_grow 00:32:07.185 ************************************ 
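Before the lvs_grow output starts, note the shape of the nvmf_lvol teardown just above; it is the stock exit path for these targets, and the order matters: subsystem first, then the bdev stack beneath it, then the initiator-side kernel modules, then the target process. Condensed, with paths shortened to rpc.py (the retry loop's break condition is abbreviated here, not taken verbatim from nvmf/common.sh):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0           # drops listeners and namespaces
  rpc.py bdev_lvol_delete 23829527-e43f-4e10-9cb2-5cdace4fd23c
  rpc.py bdev_lvol_delete_lvstore -u 755a2c72-c861-4030-9239-ddf8f888f760
  for i in {1..20}; do                                              # nvmf/common.sh retries module unload
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  kill 616673                                                       # nvmf_tgt pid, then wait

The ps --no-headers -o comm= probe in the trace is killprocess checking what it is about to signal (reactor_0 here; a comm of sudo is special-cased) before issuing kill and wait.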
00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:07.185 * Looking for test storage... 00:32:07.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.185 --rc genhtml_branch_coverage=1 00:32:07.185 --rc genhtml_function_coverage=1 00:32:07.185 --rc genhtml_legend=1 00:32:07.185 --rc geninfo_all_blocks=1 00:32:07.185 --rc geninfo_unexecuted_blocks=1 00:32:07.185 00:32:07.185 ' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.185 --rc genhtml_branch_coverage=1 00:32:07.185 --rc genhtml_function_coverage=1 00:32:07.185 --rc genhtml_legend=1 00:32:07.185 --rc geninfo_all_blocks=1 00:32:07.185 --rc geninfo_unexecuted_blocks=1 00:32:07.185 00:32:07.185 ' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.185 --rc genhtml_branch_coverage=1 00:32:07.185 --rc genhtml_function_coverage=1 00:32:07.185 --rc genhtml_legend=1 00:32:07.185 --rc geninfo_all_blocks=1 00:32:07.185 --rc geninfo_unexecuted_blocks=1 00:32:07.185 00:32:07.185 ' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.185 --rc genhtml_branch_coverage=1 00:32:07.185 --rc genhtml_function_coverage=1 00:32:07.185 --rc genhtml_legend=1 00:32:07.185 --rc geninfo_all_blocks=1 00:32:07.185 --rc geninfo_unexecuted_blocks=1 00:32:07.185 00:32:07.185 ' 00:32:07.185 11:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.185 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.447 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
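The build_nvmf_app_args trace here, finishing in the next lines, accumulates the target's command line as a bash array that nvmf_tcp_init later prefixes with the namespace wrapper; that is why the launch eventually appears as one long ip netns exec command. A sketch of the pattern (the base binary assignment and the backgrounding are assumptions, not shown verbatim in this trace):

  NVMF_APP=(nvmf_tgt)                                           # assumed base; the real path is under build/bin
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                   # shm id and full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                                   # empty for a normal hugepage run
  NVMF_APP+=(--interrupt-mode)                                  # this job runs with --interrupt-mode
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")   # set once the netns exists
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x1 &                                     # expands to the nvmf_tgt -i 0 ... line logged below

Keeping argv as an array instead of a flat string is what lets the netns prefix be spliced on without any re-quoting.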
00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.448 11:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.757 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.757 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.757 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.758 11:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:15.758 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:15.758 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:15.758 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:15.758 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.758 11:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.758 11:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.758 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:32:15.758 00:32:15.758 --- 10.0.0.2 ping statistics --- 00:32:15.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.758 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:32:15.759 00:32:15.759 --- 10.0.0.1 ping statistics --- 00:32:15.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.759 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=623662 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 623662 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 623662 ']' 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:15.759 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.759 [2024-11-15 11:12:34.286484] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
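The nvmftestinit sequence that just pinged clean in both directions is the entire test topology: one port of the two-port e810 (cvl_0_0) moves into a private namespace as the target side, its peer (cvl_0_1, presumably looped back to it physically) stays in the root namespace as the initiator, and 10.0.0.0/24 runs between them. Collected from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port disappears from the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # tagged SPDK_NVMF so cleanup can strip it
  ping -c 1 10.0.0.2                                               # root ns to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns back to initiator

From here on the target runs under ip netns exec cvl_0_0_ns_spdk, so the kernel initiator and the userspace target really traverse the NIC rather than loopback.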
00:32:15.759 [2024-11-15 11:12:34.287622] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:32:15.759 [2024-11-15 11:12:34.287672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.759 [2024-11-15 11:12:34.386763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.759 [2024-11-15 11:12:34.438689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.759 [2024-11-15 11:12:34.438741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.759 [2024-11-15 11:12:34.438750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.759 [2024-11-15 11:12:34.438757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.759 [2024-11-15 11:12:34.438764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.759 [2024-11-15 11:12:34.439536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.759 [2024-11-15 11:12:34.517503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:15.759 [2024-11-15 11:12:34.517807] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.759 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:16.021 [2024-11-15 11:12:35.300404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:16.021 ************************************ 00:32:16.021 START TEST lvs_grow_clean 00:32:16.021 ************************************ 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.021 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:16.282 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:16.282 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:16.282 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:16.282 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:16.282 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:16.544 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:16.544 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:16.544 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 lvol 150 00:32:16.805 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f 00:32:16.805 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.805 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:16.805 [2024-11-15 11:12:36.312103] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:16.805 [2024-11-15 11:12:36.312266] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:16.805 true 00:32:17.066 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:17.066 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:17.066 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:17.066 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:17.327 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f 00:32:17.588 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.588 [2024-11-15 11:12:37.036793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.588 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=624325 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 624325 /var/tmp/bdevperf.sock 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 624325 ']' 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:17.849 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:17.849 [2024-11-15 11:12:37.274200] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:32:17.849 [2024-11-15 11:12:37.274266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624325 ] 00:32:17.849 [2024-11-15 11:12:37.367052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.111 [2024-11-15 11:12:37.419379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.683 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:18.683 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:18.683 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:18.944 Nvme0n1 00:32:18.944 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:19.206 [ 00:32:19.206 { 00:32:19.206 "name": "Nvme0n1", 00:32:19.206 "aliases": [ 00:32:19.206 "a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f" 00:32:19.206 ], 00:32:19.206 "product_name": "NVMe disk", 00:32:19.206 "block_size": 4096, 00:32:19.206 "num_blocks": 38912, 00:32:19.206 "uuid": "a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f", 00:32:19.206 "numa_id": 0, 00:32:19.206 "assigned_rate_limits": { 00:32:19.206 "rw_ios_per_sec": 0, 00:32:19.206 "rw_mbytes_per_sec": 0, 00:32:19.206 "r_mbytes_per_sec": 0, 00:32:19.206 "w_mbytes_per_sec": 0 00:32:19.206 }, 00:32:19.206 "claimed": false, 00:32:19.206 "zoned": false, 00:32:19.206 "supported_io_types": { 00:32:19.206 "read": true, 00:32:19.206 "write": true, 00:32:19.206 "unmap": true, 00:32:19.206 "flush": true, 00:32:19.206 "reset": true, 00:32:19.206 "nvme_admin": true, 00:32:19.206 "nvme_io": true, 00:32:19.206 "nvme_io_md": false, 00:32:19.206 "write_zeroes": true, 00:32:19.206 "zcopy": false, 00:32:19.206 "get_zone_info": false, 00:32:19.206 "zone_management": false, 00:32:19.206 "zone_append": false, 00:32:19.206 "compare": true, 00:32:19.206 "compare_and_write": true, 00:32:19.206 "abort": true, 00:32:19.206 "seek_hole": false, 00:32:19.206 "seek_data": false, 00:32:19.206 "copy": true, 
00:32:19.206 "nvme_iov_md": false 00:32:19.206 }, 00:32:19.206 "memory_domains": [ 00:32:19.206 { 00:32:19.206 "dma_device_id": "system", 00:32:19.206 "dma_device_type": 1 00:32:19.206 } 00:32:19.206 ], 00:32:19.206 "driver_specific": { 00:32:19.206 "nvme": [ 00:32:19.206 { 00:32:19.206 "trid": { 00:32:19.206 "trtype": "TCP", 00:32:19.206 "adrfam": "IPv4", 00:32:19.206 "traddr": "10.0.0.2", 00:32:19.206 "trsvcid": "4420", 00:32:19.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:19.206 }, 00:32:19.206 "ctrlr_data": { 00:32:19.206 "cntlid": 1, 00:32:19.206 "vendor_id": "0x8086", 00:32:19.206 "model_number": "SPDK bdev Controller", 00:32:19.206 "serial_number": "SPDK0", 00:32:19.206 "firmware_revision": "25.01", 00:32:19.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.206 "oacs": { 00:32:19.206 "security": 0, 00:32:19.206 "format": 0, 00:32:19.206 "firmware": 0, 00:32:19.206 "ns_manage": 0 00:32:19.206 }, 00:32:19.206 "multi_ctrlr": true, 00:32:19.206 "ana_reporting": false 00:32:19.206 }, 00:32:19.206 "vs": { 00:32:19.206 "nvme_version": "1.3" 00:32:19.206 }, 00:32:19.206 "ns_data": { 00:32:19.206 "id": 1, 00:32:19.206 "can_share": true 00:32:19.206 } 00:32:19.206 } 00:32:19.206 ], 00:32:19.206 "mp_policy": "active_passive" 00:32:19.206 } 00:32:19.206 } 00:32:19.206 ] 00:32:19.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=624659 00:32:19.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:19.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.206 Running I/O for 10 seconds... 
00:32:20.149 Latency(us) 00:32:20.149 [2024-11-15T10:12:39.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.149 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:20.149 [2024-11-15T10:12:39.676Z] =================================================================================================================== 00:32:20.149 [2024-11-15T10:12:39.676Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:20.149 00:32:21.091 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:21.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.352 Nvme0n1 : 2.00 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:32:21.352 [2024-11-15T10:12:40.879Z] =================================================================================================================== 00:32:21.352 [2024-11-15T10:12:40.879Z] Total : 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:32:21.352 00:32:21.352 true 00:32:21.352 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:21.352 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:21.613 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:21.613 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:21.613 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 624659 00:32:22.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.182 Nvme0n1 : 3.00 17187.33 67.14 0.00 0.00 0.00 0.00 0.00 00:32:22.182 [2024-11-15T10:12:41.709Z] =================================================================================================================== 00:32:22.182 [2024-11-15T10:12:41.709Z] Total : 17187.33 67.14 0.00 0.00 0.00 0.00 0.00 00:32:22.182 00:32:23.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.561 Nvme0n1 : 4.00 17669.00 69.02 0.00 0.00 0.00 0.00 0.00 00:32:23.561 [2024-11-15T10:12:43.088Z] =================================================================================================================== 00:32:23.561 [2024-11-15T10:12:43.088Z] Total : 17669.00 69.02 0.00 0.00 0.00 0.00 0.00 00:32:23.561 00:32:24.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.501 Nvme0n1 : 5.00 19212.20 75.05 0.00 0.00 0.00 0.00 0.00 00:32:24.501 [2024-11-15T10:12:44.028Z] =================================================================================================================== 00:32:24.501 [2024-11-15T10:12:44.028Z] Total : 19212.20 75.05 0.00 0.00 0.00 0.00 0.00 00:32:24.501 00:32:25.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.442 Nvme0n1 : 6.00 20243.50 79.08 0.00 0.00 0.00 0.00 0.00 00:32:25.442 [2024-11-15T10:12:44.969Z] 
=================================================================================================================== 00:32:25.442 [2024-11-15T10:12:44.969Z] Total : 20243.50 79.08 0.00 0.00 0.00 0.00 0.00 00:32:25.442 00:32:26.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.380 Nvme0n1 : 7.00 20980.14 81.95 0.00 0.00 0.00 0.00 0.00 00:32:26.380 [2024-11-15T10:12:45.907Z] =================================================================================================================== 00:32:26.380 [2024-11-15T10:12:45.907Z] Total : 20980.14 81.95 0.00 0.00 0.00 0.00 0.00 00:32:26.380 00:32:27.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.319 Nvme0n1 : 8.00 21532.62 84.11 0.00 0.00 0.00 0.00 0.00 00:32:27.319 [2024-11-15T10:12:46.846Z] =================================================================================================================== 00:32:27.319 [2024-11-15T10:12:46.846Z] Total : 21532.62 84.11 0.00 0.00 0.00 0.00 0.00 00:32:27.319 00:32:28.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.260 Nvme0n1 : 9.00 21962.33 85.79 0.00 0.00 0.00 0.00 0.00 00:32:28.260 [2024-11-15T10:12:47.787Z] =================================================================================================================== 00:32:28.260 [2024-11-15T10:12:47.787Z] Total : 21962.33 85.79 0.00 0.00 0.00 0.00 0.00 00:32:28.260 00:32:29.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.202 Nvme0n1 : 10.00 22306.10 87.13 0.00 0.00 0.00 0.00 0.00 00:32:29.202 [2024-11-15T10:12:48.729Z] =================================================================================================================== 00:32:29.202 [2024-11-15T10:12:48.729Z] Total : 22306.10 87.13 0.00 0.00 0.00 0.00 0.00 00:32:29.202 00:32:29.202 00:32:29.202 Latency(us) 00:32:29.202 [2024-11-15T10:12:48.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.202 Nvme0n1 : 10.01 22306.77 87.14 0.00 0.00 5735.22 2908.16 32112.64 00:32:29.202 [2024-11-15T10:12:48.729Z] =================================================================================================================== 00:32:29.203 [2024-11-15T10:12:48.730Z] Total : 22306.77 87.14 0.00 0.00 5735.22 2908.16 32112.64 00:32:29.203 { 00:32:29.203 "results": [ 00:32:29.203 { 00:32:29.203 "job": "Nvme0n1", 00:32:29.203 "core_mask": "0x2", 00:32:29.203 "workload": "randwrite", 00:32:29.203 "status": "finished", 00:32:29.203 "queue_depth": 128, 00:32:29.203 "io_size": 4096, 00:32:29.203 "runtime": 10.005437, 00:32:29.203 "iops": 22306.771808167898, 00:32:29.203 "mibps": 87.13582737565585, 00:32:29.203 "io_failed": 0, 00:32:29.203 "io_timeout": 0, 00:32:29.203 "avg_latency_us": 5735.224195457661, 00:32:29.203 "min_latency_us": 2908.16, 00:32:29.203 "max_latency_us": 32112.64 00:32:29.203 } 00:32:29.203 ], 00:32:29.203 "core_count": 1 00:32:29.203 } 00:32:29.203 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 624325 00:32:29.203 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 624325 ']' 00:32:29.203 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 624325 00:32:29.203 11:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:29.203 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.203 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 624325 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 624325' 00:32:29.467 killing process with pid 624325 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 624325 00:32:29.467 Received shutdown signal, test time was about 10.000000 seconds 00:32:29.467 00:32:29.467 Latency(us) 00:32:29.467 [2024-11-15T10:12:48.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.467 [2024-11-15T10:12:48.994Z] =================================================================================================================== 00:32:29.467 [2024-11-15T10:12:48.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 624325 00:32:29.467 11:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.736 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:29.736 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:29.736 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:29.997 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:29.997 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:29.997 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:30.258 [2024-11-15 11:12:49.560171] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:30.258 11:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:30.258 request: 00:32:30.258 { 00:32:30.258 "uuid": "16b9732a-79fc-42a6-86b9-7f88dc978d82", 00:32:30.258 "method": "bdev_lvol_get_lvstores", 00:32:30.258 "req_id": 1 00:32:30.258 } 00:32:30.258 Got JSON-RPC error response 00:32:30.258 response: 00:32:30.258 { 00:32:30.258 "code": -19, 00:32:30.258 "message": "No such device" 00:32:30.258 } 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:30.258 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:30.519 aio_bdev 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f 
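The request/response pair above is a deliberate negative check: after bdev_aio_delete hot-removes the lvstore's base bdev, bdev_lvol_get_lvstores on the old UUID must fail with -19 (No such device), which the NOT wrapper converts into a pass. A sketch of the same check, with rpc.py standing for the full scripts/rpc.py path (these calls go to the target's default socket, not bdevperf's):

  if rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82; then
      echo 'lvstore should be gone after bdev_aio_delete' >&2
      exit 1
  fi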
00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:30.519 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:30.780 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f -t 2000 00:32:30.780 [ 00:32:30.780 { 00:32:30.780 "name": "a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f", 00:32:30.780 "aliases": [ 00:32:30.780 "lvs/lvol" 00:32:30.780 ], 00:32:30.780 "product_name": "Logical Volume", 00:32:30.780 "block_size": 4096, 00:32:30.780 "num_blocks": 38912, 00:32:30.780 "uuid": "a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f", 00:32:30.780 "assigned_rate_limits": { 00:32:30.780 "rw_ios_per_sec": 0, 00:32:30.780 "rw_mbytes_per_sec": 0, 00:32:30.780 "r_mbytes_per_sec": 0, 00:32:30.780 "w_mbytes_per_sec": 0 00:32:30.780 }, 00:32:30.780 "claimed": false, 00:32:30.780 "zoned": false, 00:32:30.780 "supported_io_types": { 00:32:30.780 "read": true, 00:32:30.780 "write": true, 00:32:30.780 "unmap": true, 00:32:30.781 "flush": false, 00:32:30.781 "reset": true, 00:32:30.781 "nvme_admin": false, 00:32:30.781 "nvme_io": false, 00:32:30.781 "nvme_io_md": false, 00:32:30.781 "write_zeroes": true, 00:32:30.781 "zcopy": false, 00:32:30.781 "get_zone_info": false, 00:32:30.781 "zone_management": false, 00:32:30.781 "zone_append": false, 00:32:30.781 "compare": false, 00:32:30.781 "compare_and_write": false, 00:32:30.781 "abort": false, 00:32:30.781 "seek_hole": true, 00:32:30.781 "seek_data": true, 00:32:30.781 "copy": false, 00:32:30.781 "nvme_iov_md": false 00:32:30.781 }, 00:32:30.781 "driver_specific": { 00:32:30.781 "lvol": { 00:32:30.781 "lvol_store_uuid": "16b9732a-79fc-42a6-86b9-7f88dc978d82", 00:32:30.781 "base_bdev": "aio_bdev", 00:32:30.781 "thin_provision": false, 00:32:30.781 "num_allocated_clusters": 38, 00:32:30.781 "snapshot": false, 00:32:30.781 "clone": false, 00:32:30.781 "esnap_clone": false 00:32:30.781 } 00:32:30.781 } 00:32:30.781 } 00:32:30.781 ] 00:32:30.781 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:30.781 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:30.781 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:31.042 11:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:31.042 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:31.042 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:31.303 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:31.303 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4bdb77b-f6e8-422e-86ab-239c6b4fdb5f 00:32:31.303 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16b9732a-79fc-42a6-86b9-7f88dc978d82 00:32:31.564 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.825 00:32:31.825 real 0m15.828s 00:32:31.825 user 0m15.571s 00:32:31.825 sys 0m1.423s 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:31.825 ************************************ 00:32:31.825 END TEST lvs_grow_clean 00:32:31.825 ************************************ 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.825 ************************************ 00:32:31.825 START TEST lvs_grow_dirty 00:32:31.825 ************************************ 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.825 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.085 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:32.085 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:32.345 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:32.345 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:32.346 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:32.607 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:32.607 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:32.607 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 lvol 150 00:32:32.607 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=80202fd0-97a5-40bd-9642-429de60d48ed 00:32:32.607 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.607 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:32.867 [2024-11-15 11:12:52.212077] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:32.867 [2024-11-15 11:12:52.212222] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:32.867 true 00:32:32.867 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:32.867 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:33.127 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:33.127 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:33.127 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80202fd0-97a5-40bd-9642-429de60d48ed 00:32:33.387 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.387 [2024-11-15 11:12:52.892651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.387 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=627398 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 627398 /var/tmp/bdevperf.sock 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 627398 ']' 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
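Everything in the lvs_grow_dirty setup above is file-backed: a 200M file becomes aio_bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), a 150M lvol is carved out and exported over NVMe-oF, and the file is then grown to 400M and rescanned so the lvstore can be grown while bdevperf writes to it. Condensed sketch; the commands are verbatim from the log, with rpc.py again standing for the full scripts/rpc.py path and $TARGET for the test/nvmf/target directory:

  truncate -s 200M $TARGET/aio_bdev
  rpc.py bdev_aio_create $TARGET/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 lvol 150
  truncate -s 400M $TARGET/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev   # block count 51200 -> 102400, per the notice above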
00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.647 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:33.647 [2024-11-15 11:12:53.126200] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:32:33.647 [2024-11-15 11:12:53.126255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627398 ] 00:32:33.909 [2024-11-15 11:12:53.212809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.909 [2024-11-15 11:12:53.243506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.479 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.479 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:34.479 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:35.050 Nvme0n1 00:32:35.051 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:35.051 [ 00:32:35.051 { 00:32:35.051 "name": "Nvme0n1", 00:32:35.051 "aliases": [ 00:32:35.051 "80202fd0-97a5-40bd-9642-429de60d48ed" 00:32:35.051 ], 00:32:35.051 "product_name": "NVMe disk", 00:32:35.051 "block_size": 4096, 00:32:35.051 "num_blocks": 38912, 00:32:35.051 "uuid": "80202fd0-97a5-40bd-9642-429de60d48ed", 00:32:35.051 "numa_id": 0, 00:32:35.051 "assigned_rate_limits": { 00:32:35.051 "rw_ios_per_sec": 0, 00:32:35.051 "rw_mbytes_per_sec": 0, 00:32:35.051 "r_mbytes_per_sec": 0, 00:32:35.051 "w_mbytes_per_sec": 0 00:32:35.051 }, 00:32:35.051 "claimed": false, 00:32:35.051 "zoned": false, 00:32:35.051 "supported_io_types": { 00:32:35.051 "read": true, 00:32:35.051 "write": true, 00:32:35.051 "unmap": true, 00:32:35.051 "flush": true, 00:32:35.051 "reset": true, 00:32:35.051 "nvme_admin": true, 00:32:35.051 "nvme_io": true, 00:32:35.051 "nvme_io_md": false, 00:32:35.051 "write_zeroes": true, 00:32:35.051 "zcopy": false, 00:32:35.051 "get_zone_info": false, 00:32:35.051 "zone_management": false, 00:32:35.051 "zone_append": false, 00:32:35.051 "compare": true, 00:32:35.051 "compare_and_write": true, 00:32:35.051 "abort": true, 00:32:35.051 "seek_hole": false, 00:32:35.051 "seek_data": false, 00:32:35.051 "copy": true, 00:32:35.051 "nvme_iov_md": false 00:32:35.051 }, 00:32:35.051 "memory_domains": [ 00:32:35.051 { 00:32:35.051 "dma_device_id": "system", 00:32:35.051 "dma_device_type": 1 00:32:35.051 } 00:32:35.051 ], 00:32:35.051 "driver_specific": { 00:32:35.051 "nvme": [ 00:32:35.051 { 00:32:35.051 "trid": { 00:32:35.051 "trtype": "TCP", 00:32:35.051 "adrfam": "IPv4", 00:32:35.051 "traddr": "10.0.0.2", 00:32:35.051 "trsvcid": "4420", 00:32:35.051 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:35.051 }, 00:32:35.051 "ctrlr_data": { 
00:32:35.051 "cntlid": 1, 00:32:35.051 "vendor_id": "0x8086", 00:32:35.051 "model_number": "SPDK bdev Controller", 00:32:35.051 "serial_number": "SPDK0", 00:32:35.051 "firmware_revision": "25.01", 00:32:35.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.051 "oacs": { 00:32:35.051 "security": 0, 00:32:35.051 "format": 0, 00:32:35.051 "firmware": 0, 00:32:35.051 "ns_manage": 0 00:32:35.051 }, 00:32:35.051 "multi_ctrlr": true, 00:32:35.051 "ana_reporting": false 00:32:35.051 }, 00:32:35.051 "vs": { 00:32:35.051 "nvme_version": "1.3" 00:32:35.051 }, 00:32:35.051 "ns_data": { 00:32:35.051 "id": 1, 00:32:35.051 "can_share": true 00:32:35.051 } 00:32:35.051 } 00:32:35.051 ], 00:32:35.051 "mp_policy": "active_passive" 00:32:35.051 } 00:32:35.051 } 00:32:35.051 ] 00:32:35.051 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=627728 00:32:35.051 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:35.051 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.051 Running I/O for 10 seconds... 00:32:36.436 Latency(us) 00:32:36.436 [2024-11-15T10:12:55.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.436 Nvme0n1 : 1.00 17409.00 68.00 0.00 0.00 0.00 0.00 0.00 00:32:36.436 [2024-11-15T10:12:55.963Z] =================================================================================================================== 00:32:36.436 [2024-11-15T10:12:55.963Z] Total : 17409.00 68.00 0.00 0.00 0.00 0.00 0.00 00:32:36.436 00:32:37.007 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:37.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.267 Nvme0n1 : 2.00 17707.00 69.17 0.00 0.00 0.00 0.00 0.00 00:32:37.267 [2024-11-15T10:12:56.794Z] =================================================================================================================== 00:32:37.267 [2024-11-15T10:12:56.794Z] Total : 17707.00 69.17 0.00 0.00 0.00 0.00 0.00 00:32:37.267 00:32:37.267 true 00:32:37.267 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:37.267 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:37.528 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:37.528 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:37.528 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 627728 00:32:38.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.099 Nvme0n1 : 3.00 
17783.33 69.47 0.00 0.00 0.00 0.00 0.00 00:32:38.099 [2024-11-15T10:12:57.626Z] =================================================================================================================== 00:32:38.099 [2024-11-15T10:12:57.626Z] Total : 17783.33 69.47 0.00 0.00 0.00 0.00 0.00 00:32:38.099 00:32:39.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.484 Nvme0n1 : 4.00 17846.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:39.484 [2024-11-15T10:12:59.011Z] =================================================================================================================== 00:32:39.484 [2024-11-15T10:12:59.011Z] Total : 17846.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:39.484 00:32:40.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.055 Nvme0n1 : 5.00 18848.80 73.63 0.00 0.00 0.00 0.00 0.00 00:32:40.055 [2024-11-15T10:12:59.582Z] =================================================================================================================== 00:32:40.055 [2024-11-15T10:12:59.582Z] Total : 18848.80 73.63 0.00 0.00 0.00 0.00 0.00 00:32:40.055 00:32:41.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.437 Nvme0n1 : 6.00 19925.17 77.83 0.00 0.00 0.00 0.00 0.00 00:32:41.437 [2024-11-15T10:13:00.964Z] =================================================================================================================== 00:32:41.437 [2024-11-15T10:13:00.964Z] Total : 19925.17 77.83 0.00 0.00 0.00 0.00 0.00 00:32:41.437 00:32:42.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.377 Nvme0n1 : 7.00 20707.29 80.89 0.00 0.00 0.00 0.00 0.00 00:32:42.377 [2024-11-15T10:13:01.904Z] =================================================================================================================== 00:32:42.377 [2024-11-15T10:13:01.904Z] Total : 20707.29 80.89 0.00 0.00 0.00 0.00 0.00 00:32:42.377 00:32:43.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.319 Nvme0n1 : 8.00 21301.88 83.21 0.00 0.00 0.00 0.00 0.00 00:32:43.319 [2024-11-15T10:13:02.846Z] =================================================================================================================== 00:32:43.319 [2024-11-15T10:13:02.846Z] Total : 21301.88 83.21 0.00 0.00 0.00 0.00 0.00 00:32:43.319 00:32:44.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.258 Nvme0n1 : 9.00 21769.67 85.04 0.00 0.00 0.00 0.00 0.00 00:32:44.258 [2024-11-15T10:13:03.785Z] =================================================================================================================== 00:32:44.258 [2024-11-15T10:13:03.785Z] Total : 21769.67 85.04 0.00 0.00 0.00 0.00 0.00 00:32:44.258 00:32:45.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.199 Nvme0n1 : 10.00 22132.80 86.46 0.00 0.00 0.00 0.00 0.00 00:32:45.199 [2024-11-15T10:13:04.726Z] =================================================================================================================== 00:32:45.199 [2024-11-15T10:13:04.726Z] Total : 22132.80 86.46 0.00 0.00 0.00 0.00 0.00 00:32:45.199 00:32:45.199 00:32:45.199 Latency(us) 00:32:45.199 [2024-11-15T10:13:04.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.199 Nvme0n1 : 10.00 22136.07 86.47 0.00 0.00 5779.63 2416.64 28398.93 00:32:45.199 
[2024-11-15T10:13:04.726Z] =================================================================================================================== 00:32:45.199 [2024-11-15T10:13:04.726Z] Total : 22136.07 86.47 0.00 0.00 5779.63 2416.64 28398.93 00:32:45.199 { 00:32:45.199 "results": [ 00:32:45.199 { 00:32:45.199 "job": "Nvme0n1", 00:32:45.199 "core_mask": "0x2", 00:32:45.199 "workload": "randwrite", 00:32:45.199 "status": "finished", 00:32:45.199 "queue_depth": 128, 00:32:45.199 "io_size": 4096, 00:32:45.199 "runtime": 10.004306, 00:32:45.199 "iops": 22136.068209029192, 00:32:45.199 "mibps": 86.46901644152028, 00:32:45.199 "io_failed": 0, 00:32:45.199 "io_timeout": 0, 00:32:45.199 "avg_latency_us": 5779.626327577487, 00:32:45.199 "min_latency_us": 2416.64, 00:32:45.199 "max_latency_us": 28398.933333333334 00:32:45.199 } 00:32:45.199 ], 00:32:45.199 "core_count": 1 00:32:45.199 } 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 627398 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 627398 ']' 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 627398 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 627398 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 627398' 00:32:45.199 killing process with pid 627398 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 627398 00:32:45.199 Received shutdown signal, test time was about 10.000000 seconds 00:32:45.199 00:32:45.199 Latency(us) 00:32:45.199 [2024-11-15T10:13:04.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.199 [2024-11-15T10:13:04.726Z] =================================================================================================================== 00:32:45.199 [2024-11-15T10:13:04.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.199 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 627398 00:32:45.459 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.459 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
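As in the clean variant, the lvstore was grown into the enlarged file about two seconds into the 10-second randwrite run (the bdev_lvol_grow_lvstore call above, between the 1.00 and 2.00 samples), and the cluster count was re-checked while I/O continued: 49 data clusters became 99. Sketch of the verification, UUID taken from this log:

  rpc.py bdev_lvol_grow_lvstore -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4
  rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 \
      | jq -r '.[0].total_data_clusters'   # expect 99 after the grow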
00:32:45.719 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:45.719 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 623662 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 623662 00:32:45.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 623662 Killed "${NVMF_APP[@]}" "$@" 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=629747 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 629747 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 629747 ']' 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
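This is where the dirty variant earns its name: rather than deleting the lvstore, the test kill -9s the original nvmf_tgt (pid 623662, started before this excerpt), so the blobstore never gets a clean shutdown, then immediately starts a replacement target with tracing and interrupt mode enabled. A rough sketch of the pattern, omitting the ip netns wrapper shown in the log and assuming $nvmfpid holds the old target's pid:

  kill -9 $nvmfpid
  wait $nvmfpid || true                        # reaps the 'Killed' status seen above
  nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &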
00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.980 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.980 [2024-11-15 11:13:05.411667] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.980 [2024-11-15 11:13:05.412643] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:32:45.980 [2024-11-15 11:13:05.412685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.980 [2024-11-15 11:13:05.504068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.241 [2024-11-15 11:13:05.534784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.241 [2024-11-15 11:13:05.534814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.241 [2024-11-15 11:13:05.534819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.241 [2024-11-15 11:13:05.534824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.241 [2024-11-15 11:13:05.534832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.241 [2024-11-15 11:13:05.535296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.241 [2024-11-15 11:13:05.587016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.241 [2024-11-15 11:13:05.587212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
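With the replacement target up, re-creating the aio bdev over the same file is what forces the blobstore to detect the unclean shutdown; the "Performing recovery on blobstore" notices just below are the expected evidence, and the cluster checks that follow confirm the earlier grow survived recovery (99 total minus the lvol's 38 allocated clusters leaves 61 free). Sketch, paths abbreviated as before:

  rpc.py bdev_aio_create $TARGET/aio_bdev aio_bdev 4096   # triggers blobstore recovery
  rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 \
      | jq -r '.[0].free_clusters'                        # expect 61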
00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.813 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:47.073 [2024-11-15 11:13:06.441858] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:47.073 [2024-11-15 11:13:06.442107] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:47.073 [2024-11-15 11:13:06.442199] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 80202fd0-97a5-40bd-9642-429de60d48ed 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=80202fd0-97a5-40bd-9642-429de60d48ed 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:47.073 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:47.334 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80202fd0-97a5-40bd-9642-429de60d48ed -t 2000 00:32:47.334 [ 00:32:47.334 { 00:32:47.334 "name": "80202fd0-97a5-40bd-9642-429de60d48ed", 00:32:47.334 "aliases": [ 00:32:47.334 "lvs/lvol" 00:32:47.334 ], 00:32:47.334 "product_name": "Logical Volume", 00:32:47.334 "block_size": 4096, 00:32:47.334 "num_blocks": 38912, 00:32:47.334 "uuid": "80202fd0-97a5-40bd-9642-429de60d48ed", 00:32:47.334 "assigned_rate_limits": { 00:32:47.334 "rw_ios_per_sec": 0, 00:32:47.334 "rw_mbytes_per_sec": 0, 00:32:47.334 
"r_mbytes_per_sec": 0, 00:32:47.334 "w_mbytes_per_sec": 0 00:32:47.334 }, 00:32:47.334 "claimed": false, 00:32:47.334 "zoned": false, 00:32:47.334 "supported_io_types": { 00:32:47.334 "read": true, 00:32:47.334 "write": true, 00:32:47.334 "unmap": true, 00:32:47.334 "flush": false, 00:32:47.334 "reset": true, 00:32:47.334 "nvme_admin": false, 00:32:47.334 "nvme_io": false, 00:32:47.334 "nvme_io_md": false, 00:32:47.334 "write_zeroes": true, 00:32:47.334 "zcopy": false, 00:32:47.334 "get_zone_info": false, 00:32:47.334 "zone_management": false, 00:32:47.334 "zone_append": false, 00:32:47.334 "compare": false, 00:32:47.334 "compare_and_write": false, 00:32:47.334 "abort": false, 00:32:47.334 "seek_hole": true, 00:32:47.334 "seek_data": true, 00:32:47.334 "copy": false, 00:32:47.334 "nvme_iov_md": false 00:32:47.334 }, 00:32:47.334 "driver_specific": { 00:32:47.334 "lvol": { 00:32:47.334 "lvol_store_uuid": "b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4", 00:32:47.334 "base_bdev": "aio_bdev", 00:32:47.334 "thin_provision": false, 00:32:47.334 "num_allocated_clusters": 38, 00:32:47.334 "snapshot": false, 00:32:47.334 "clone": false, 00:32:47.334 "esnap_clone": false 00:32:47.334 } 00:32:47.334 } 00:32:47.334 } 00:32:47.334 ] 00:32:47.334 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:47.335 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:47.335 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:47.595 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:47.595 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:47.595 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:47.855 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:47.855 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:48.116 [2024-11-15 11:13:07.383851] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:48.116 11:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:48.116 request: 00:32:48.116 { 00:32:48.116 "uuid": "b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4", 00:32:48.116 "method": "bdev_lvol_get_lvstores", 00:32:48.116 "req_id": 1 00:32:48.116 } 00:32:48.116 Got JSON-RPC error response 00:32:48.116 response: 00:32:48.116 { 00:32:48.116 "code": -19, 00:32:48.116 "message": "No such device" 00:32:48.116 } 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:48.116 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.376 aio_bdev 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80202fd0-97a5-40bd-9642-429de60d48ed 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=80202fd0-97a5-40bd-9642-429de60d48ed 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:48.376 11:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:48.376 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:48.637 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80202fd0-97a5-40bd-9642-429de60d48ed -t 2000 00:32:48.637 [ 00:32:48.637 { 00:32:48.637 "name": "80202fd0-97a5-40bd-9642-429de60d48ed", 00:32:48.637 "aliases": [ 00:32:48.637 "lvs/lvol" 00:32:48.637 ], 00:32:48.637 "product_name": "Logical Volume", 00:32:48.637 "block_size": 4096, 00:32:48.637 "num_blocks": 38912, 00:32:48.637 "uuid": "80202fd0-97a5-40bd-9642-429de60d48ed", 00:32:48.637 "assigned_rate_limits": { 00:32:48.637 "rw_ios_per_sec": 0, 00:32:48.637 "rw_mbytes_per_sec": 0, 00:32:48.637 "r_mbytes_per_sec": 0, 00:32:48.637 "w_mbytes_per_sec": 0 00:32:48.637 }, 00:32:48.637 "claimed": false, 00:32:48.637 "zoned": false, 00:32:48.637 "supported_io_types": { 00:32:48.637 "read": true, 00:32:48.637 "write": true, 00:32:48.637 "unmap": true, 00:32:48.637 "flush": false, 00:32:48.637 "reset": true, 00:32:48.637 "nvme_admin": false, 00:32:48.637 "nvme_io": false, 00:32:48.637 "nvme_io_md": false, 00:32:48.637 "write_zeroes": true, 00:32:48.637 "zcopy": false, 00:32:48.637 "get_zone_info": false, 00:32:48.637 "zone_management": false, 00:32:48.637 "zone_append": false, 00:32:48.637 "compare": false, 00:32:48.637 "compare_and_write": false, 00:32:48.637 "abort": false, 00:32:48.637 "seek_hole": true, 00:32:48.637 "seek_data": true, 00:32:48.637 "copy": false, 00:32:48.637 "nvme_iov_md": false 00:32:48.637 }, 00:32:48.637 "driver_specific": { 00:32:48.637 "lvol": { 00:32:48.637 "lvol_store_uuid": "b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4", 00:32:48.637 "base_bdev": "aio_bdev", 00:32:48.637 "thin_provision": false, 00:32:48.637 "num_allocated_clusters": 38, 00:32:48.637 "snapshot": false, 00:32:48.637 "clone": false, 00:32:48.637 "esnap_clone": false 00:32:48.637 } 00:32:48.637 } 00:32:48.637 } 00:32:48.637 ] 00:32:48.637 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:48.637 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:48.637 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:48.897 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:48.898 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:48.898 11:13:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:49.158 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:49.158 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80202fd0-97a5-40bd-9642-429de60d48ed 00:32:49.158 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4 00:32:49.418 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:49.680 00:32:49.680 real 0m17.776s 00:32:49.680 user 0m35.576s 00:32:49.680 sys 0m3.193s 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:49.680 ************************************ 00:32:49.680 END TEST lvs_grow_dirty 00:32:49.680 ************************************ 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:49.680 nvmf_trace.0 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
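Stripped of the xtrace noise, the lvs_grow_dirty sequence traced above reduces to the following RPC flow (a condensed sketch reusing the UUIDs and paths from this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    LVS=b9499c18-f6c1-462a-bf2d-d1ad1a00ebc4
    AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

    # Hot-remove the base AIO bdev; vbdev_lvol closes the lvstore on top of it "dirty".
    $RPC bdev_aio_delete aio_bdev
    # The lvstore is gone, so this fails with -19 "No such device" (the NOT helper expects the failure).
    $RPC bdev_lvol_get_lvstores -u $LVS || true
    # Re-creating the AIO bdev triggers examine and the dirty lvstore is reloaded.
    $RPC bdev_aio_create $AIO_FILE aio_bdev 4096
    # Cluster accounting survives the reload: free_clusters 61 of total_data_clusters 99.
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'
    # Cleanup, as at the end of the test above.
    $RPC bdev_lvol_delete 80202fd0-97a5-40bd-9642-429de60d48ed
    $RPC bdev_lvol_delete_lvstore -u $LVS
    $RPC bdev_aio_delete aio_bdev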
00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.680 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.680 rmmod nvme_tcp 00:32:49.680 rmmod nvme_fabrics 00:32:49.680 rmmod nvme_keyring 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 629747 ']' 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 629747 00:32:49.941 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 629747 ']' 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 629747 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 629747 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 629747' 00:32:49.942 killing process with pid 629747 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 629747 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 629747 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.942 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.490 00:32:52.490 real 0m45.021s 00:32:52.490 user 0m53.953s 00:32:52.490 sys 0m10.946s 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:52.490 ************************************ 00:32:52.490 END TEST nvmf_lvs_grow 00:32:52.490 ************************************ 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:52.490 ************************************ 00:32:52.490 START TEST nvmf_bdev_io_wait 00:32:52.490 ************************************ 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:52.490 * Looking for test storage... 
00:32:52.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.490 --rc genhtml_branch_coverage=1 00:32:52.490 --rc genhtml_function_coverage=1 00:32:52.490 --rc genhtml_legend=1 00:32:52.490 --rc geninfo_all_blocks=1 00:32:52.490 --rc geninfo_unexecuted_blocks=1 00:32:52.490 00:32:52.490 ' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.490 --rc genhtml_branch_coverage=1 00:32:52.490 --rc genhtml_function_coverage=1 00:32:52.490 --rc genhtml_legend=1 00:32:52.490 --rc geninfo_all_blocks=1 00:32:52.490 --rc geninfo_unexecuted_blocks=1 00:32:52.490 00:32:52.490 ' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.490 --rc genhtml_branch_coverage=1 00:32:52.490 --rc genhtml_function_coverage=1 00:32:52.490 --rc genhtml_legend=1 00:32:52.490 --rc geninfo_all_blocks=1 00:32:52.490 --rc geninfo_unexecuted_blocks=1 00:32:52.490 00:32:52.490 ' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.490 --rc genhtml_branch_coverage=1 00:32:52.490 --rc genhtml_function_coverage=1 00:32:52.490 --rc genhtml_legend=1 00:32:52.490 --rc geninfo_all_blocks=1 00:32:52.490 --rc 
geninfo_unexecuted_blocks=1 00:32:52.490 00:32:52.490 ' 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.490 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.491 11:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
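The device-ID tables above are what gather_supported_nvmf_pci_devs matches against; the scan in the next lines finds both E810 ports on this rig via the 0x159b entry. A quick manual cross-check on the same box (hypothetical, not part of the script) would be:

    lspci -d 8086:159b    # should list 0000:4b:00.0 and 0000:4b:00.1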
00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:00.636 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.636 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:00.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:00.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:00.637 
11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:00.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.637 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:33:00.637 00:33:00.637 --- 10.0.0.2 ping statistics --- 00:33:00.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.637 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:33:00.637 00:33:00.637 --- 10.0.0.1 ping statistics --- 00:33:00.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.637 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=634708 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 634708 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 634708 ']' 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:00.637 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
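Condensed, the nvmf_tcp_init bring-up that produced the ping output above is (all commands as in the trace):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1     # verify both directions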
00:33:00.638 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:00.638 11:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.638 [2024-11-15 11:13:19.388583] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.638 [2024-11-15 11:13:19.389727] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:00.638 [2024-11-15 11:13:19.389777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.638 [2024-11-15 11:13:19.489056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.638 [2024-11-15 11:13:19.544617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.638 [2024-11-15 11:13:19.544668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.638 [2024-11-15 11:13:19.544678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.638 [2024-11-15 11:13:19.544687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.638 [2024-11-15 11:13:19.544693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.638 [2024-11-15 11:13:19.546723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.638 [2024-11-15 11:13:19.546886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.638 [2024-11-15 11:13:19.547030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.638 [2024-11-15 11:13:19.547031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.638 [2024-11-15 11:13:19.547384] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
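nvmfappstart runs the target inside that namespace with interrupt mode on; the EAL notices above show four reactors (core mask 0xF) starting and app_thread switching to intr mode. The launch, plus a minimal readiness poll (a sketch; the autotest waitforlisten helper is more thorough), looks like:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (assumption: spdk_get_version
    # suffices as a liveness probe).
    until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done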
00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 [2024-11-15 11:13:20.335945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.900 [2024-11-15 11:13:20.336427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:00.900 [2024-11-15 11:13:20.336451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:00.900 [2024-11-15 11:13:20.336621] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 [2024-11-15 11:13:20.347930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 Malloc0 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.900 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.900 [2024-11-15 11:13:20.424283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=634837 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=634839 00:33:01.163 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.163 { 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme$subsystem", 00:33:01.163 "trtype": "$TEST_TRANSPORT", 00:33:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "$NVMF_PORT", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.163 "hdgst": ${hdgst:-false}, 00:33:01.163 "ddgst": ${ddgst:-false} 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 } 00:33:01.163 EOF 00:33:01.163 )") 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=634841 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.163 { 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme$subsystem", 00:33:01.163 "trtype": "$TEST_TRANSPORT", 00:33:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "$NVMF_PORT", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.163 "hdgst": ${hdgst:-false}, 00:33:01.163 "ddgst": ${ddgst:-false} 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 } 00:33:01.163 EOF 00:33:01.163 )") 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=634844 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.163 { 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme$subsystem", 00:33:01.163 "trtype": "$TEST_TRANSPORT", 00:33:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "$NVMF_PORT", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.163 "hdgst": ${hdgst:-false}, 00:33:01.163 "ddgst": ${ddgst:-false} 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 } 00:33:01.163 EOF 00:33:01.163 )") 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.163 { 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme$subsystem", 00:33:01.163 "trtype": "$TEST_TRANSPORT", 00:33:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "$NVMF_PORT", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.163 "hdgst": ${hdgst:-false}, 00:33:01.163 "ddgst": ${ddgst:-false} 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 } 00:33:01.163 EOF 00:33:01.163 )") 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 634837 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme1", 00:33:01.163 "trtype": "tcp", 00:33:01.163 "traddr": "10.0.0.2", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "4420", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.163 "hdgst": false, 00:33:01.163 "ddgst": false 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 }' 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme1", 00:33:01.163 "trtype": "tcp", 00:33:01.163 "traddr": "10.0.0.2", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "4420", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.163 "hdgst": false, 00:33:01.163 "ddgst": false 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 }' 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme1", 00:33:01.163 "trtype": "tcp", 00:33:01.163 "traddr": "10.0.0.2", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "4420", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.163 "hdgst": false, 00:33:01.163 "ddgst": false 00:33:01.163 }, 00:33:01.163 "method": "bdev_nvme_attach_controller" 00:33:01.163 }' 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:01.163 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.163 "params": { 00:33:01.163 "name": "Nvme1", 00:33:01.163 "trtype": "tcp", 00:33:01.163 "traddr": "10.0.0.2", 00:33:01.163 "adrfam": "ipv4", 00:33:01.163 "trsvcid": "4420", 00:33:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.163 "hdgst": false, 00:33:01.163 "ddgst": false 00:33:01.164 }, 00:33:01.164 "method": "bdev_nvme_attach_controller" 00:33:01.164 }' 00:33:01.164 [2024-11-15 11:13:20.484355] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:01.164 [2024-11-15 11:13:20.484430] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:01.164 [2024-11-15 11:13:20.485400] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:33:01.164 [2024-11-15 11:13:20.485465] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:01.164 [2024-11-15 11:13:20.486260] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:01.164 [2024-11-15 11:13:20.486332] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:01.164 [2024-11-15 11:13:20.488112] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:01.164 [2024-11-15 11:13:20.488178] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:01.425 [2024-11-15 11:13:20.708369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.425 [2024-11-15 11:13:20.749310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:01.425 [2024-11-15 11:13:20.800046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.425 [2024-11-15 11:13:20.842408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:01.425 [2024-11-15 11:13:20.865991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.425 [2024-11-15 11:13:20.903324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:01.425 [2024-11-15 11:13:20.934139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.686 [2024-11-15 11:13:20.972767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:01.686 Running I/O for 1 seconds... 00:33:01.686 Running I/O for 1 seconds... 00:33:01.686 Running I/O for 1 seconds... 00:33:01.686 Running I/O for 1 seconds... 
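A minimal sketch of the first of the four bdevperf launches above (the write workload on core mask 0x10), spelled out by hand; the four instances differ only in core mask (0x10/0x20/0x40/0x80), instance id (-i 1..4) and workload (write/read/flush/unmap). The params/method entry is exactly the one the printf calls above resolve to; the subsystems/bdev/config envelope around it is an assumption about what gen_nvmf_target_json emits, since the full wrapper is not shown in this trace.

# Sketch only: reconstructs one bdevperf launch from the xtrace above.
# The subsystems/bdev/config envelope is assumed, not visible in this log.
cfg=$(mktemp)
cat > "$cfg" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
# Same flags as the -w write instance above; the script passes the config
# via process substitution (/dev/fd/63), a plain file path works the same way.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json "$cfg" -q 128 -o 4096 -w write -t 1 -s 256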
00:33:02.631 10633.00 IOPS, 41.54 MiB/s 00:33:02.631 Latency(us) 00:33:02.631 [2024-11-15T10:13:22.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.631 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:02.631 Nvme1n1 : 1.01 10677.81 41.71 0.00 0.00 11939.50 5570.56 14199.47 00:33:02.631 [2024-11-15T10:13:22.158Z] =================================================================================================================== 00:33:02.631 [2024-11-15T10:13:22.158Z] Total : 10677.81 41.71 0.00 0.00 11939.50 5570.56 14199.47 00:33:02.631 10875.00 IOPS, 42.48 MiB/s 00:33:02.631 Latency(us) 00:33:02.631 [2024-11-15T10:13:22.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.631 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:02.631 Nvme1n1 : 1.01 10950.68 42.78 0.00 0.00 11650.21 5079.04 16165.55 00:33:02.631 [2024-11-15T10:13:22.158Z] =================================================================================================================== 00:33:02.631 [2024-11-15T10:13:22.158Z] Total : 10950.68 42.78 0.00 0.00 11650.21 5079.04 16165.55 00:33:02.892 9984.00 IOPS, 39.00 MiB/s 00:33:02.892 Latency(us) 00:33:02.892 [2024-11-15T10:13:22.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.892 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:02.892 Nvme1n1 : 1.01 10071.79 39.34 0.00 0.00 12668.14 2375.68 20753.07 00:33:02.892 [2024-11-15T10:13:22.419Z] =================================================================================================================== 00:33:02.892 [2024-11-15T10:13:22.419Z] Total : 10071.79 39.34 0.00 0.00 12668.14 2375.68 20753.07 00:33:02.892 186832.00 IOPS, 729.81 MiB/s 00:33:02.892 Latency(us) 00:33:02.892 [2024-11-15T10:13:22.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.892 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:02.892 Nvme1n1 : 1.00 186457.58 728.35 0.00 0.00 682.41 317.44 2007.04 00:33:02.892 [2024-11-15T10:13:22.419Z] =================================================================================================================== 00:33:02.892 [2024-11-15T10:13:22.419Z] Total : 186457.58 728.35 0.00 0.00 682.41 317.44 2007.04 00:33:02.892 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 634839 00:33:02.892 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 634841 00:33:02.892 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 634844 00:33:02.892 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.892 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.893 rmmod nvme_tcp 00:33:02.893 rmmod nvme_fabrics 00:33:02.893 rmmod nvme_keyring 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 634708 ']' 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 634708 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 634708 ']' 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 634708 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:02.893 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 634708 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 634708' 00:33:03.154 killing process with pid 634708 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 634708 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 634708 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:03.154 
11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.154 11:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.702 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.702 00:33:05.702 real 0m13.094s 00:33:05.702 user 0m15.883s 00:33:05.702 sys 0m7.766s 00:33:05.702 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.703 ************************************ 00:33:05.703 END TEST nvmf_bdev_io_wait 00:33:05.703 ************************************ 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.703 ************************************ 00:33:05.703 START TEST nvmf_queue_depth 00:33:05.703 ************************************ 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:05.703 * Looking for test storage... 
00:33:05.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:05.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.703 --rc genhtml_branch_coverage=1 00:33:05.703 --rc genhtml_function_coverage=1 00:33:05.703 --rc genhtml_legend=1 00:33:05.703 --rc geninfo_all_blocks=1 00:33:05.703 --rc geninfo_unexecuted_blocks=1 00:33:05.703 00:33:05.703 ' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:05.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.703 --rc genhtml_branch_coverage=1 00:33:05.703 --rc genhtml_function_coverage=1 00:33:05.703 --rc genhtml_legend=1 00:33:05.703 --rc geninfo_all_blocks=1 00:33:05.703 --rc geninfo_unexecuted_blocks=1 00:33:05.703 00:33:05.703 ' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:05.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.703 --rc genhtml_branch_coverage=1 00:33:05.703 --rc genhtml_function_coverage=1 00:33:05.703 --rc genhtml_legend=1 00:33:05.703 --rc geninfo_all_blocks=1 00:33:05.703 --rc geninfo_unexecuted_blocks=1 00:33:05.703 00:33:05.703 ' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:05.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.703 --rc genhtml_branch_coverage=1 00:33:05.703 --rc genhtml_function_coverage=1 00:33:05.703 --rc genhtml_legend=1 00:33:05.703 --rc geninfo_all_blocks=1 00:33:05.703 --rc 
geninfo_unexecuted_blocks=1 00:33:05.703 00:33:05.703 ' 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.703 11:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.703 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.704 11:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.846 11:13:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:13.846 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:13.846 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:13.846 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:13.846 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.846 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:33:13.847 00:33:13.847 --- 10.0.0.2 ping statistics --- 00:33:13.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.847 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:13.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:33:13.847 00:33:13.847 --- 10.0.0.1 ping statistics --- 00:33:13.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.847 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=639514 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 639514 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 639514 ']' 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:13.847 11:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.847 [2024-11-15 11:13:32.613099] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.847 [2024-11-15 11:13:32.614270] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:13.847 [2024-11-15 11:13:32.614323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.847 [2024-11-15 11:13:32.717094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.847 [2024-11-15 11:13:32.768420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.847 [2024-11-15 11:13:32.768467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.847 [2024-11-15 11:13:32.768476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.847 [2024-11-15 11:13:32.768482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.847 [2024-11-15 11:13:32.768489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.847 [2024-11-15 11:13:32.769252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.847 [2024-11-15 11:13:32.847184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:13.847 [2024-11-15 11:13:32.847475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
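The target for the queue-depth test is now up in interrupt mode; the rpc_cmd calls below configure it. As a sketch, the same sequence as direct rpc.py invocations, assuming rpc_cmd resolves to the default /var/tmp/spdk.sock RPC socket here; the subcommands and arguments are the ones visible in the xtrace below.

# Sketch: queue_depth.sh target configuration, replayed as plain RPC calls.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420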
00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 [2024-11-15 11:13:33.466125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 Malloc0 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 [2024-11-15 11:13:33.554320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=639616 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 639616 /var/tmp/bdevperf.sock 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 639616 ']' 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:14.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:14.110 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.110 [2024-11-15 11:13:33.623079] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
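On the initiator side, bdevperf is launched with -z so it parks until told to run over its own RPC socket; the controller attach and the run trigger then arrive as RPCs, as the next lines show. Condensed from the trace (long workspace paths shortened; a sketch, not the literal commands byte for byte):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-q 1024 is the queue depth under test and -o 4096 the I/O size, so the MiB/s column in the 10-second run below is just IOPS × 4096 / 2^20; the final summary line checks out: 12225.27 × 4096 / 2^20 ≈ 47.75 MiB/s.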
00:33:14.110 [2024-11-15 11:13:33.623140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639616 ] 00:33:14.371 [2024-11-15 11:13:33.715779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.371 [2024-11-15 11:13:33.769272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.943 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:14.943 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:14.943 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:14.943 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.943 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:15.204 NVMe0n1 00:33:15.204 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.204 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:15.465 Running I/O for 10 seconds... 00:33:17.348 8192.00 IOPS, 32.00 MiB/s [2024-11-15T10:13:37.816Z] 8442.00 IOPS, 32.98 MiB/s [2024-11-15T10:13:39.198Z] 9218.67 IOPS, 36.01 MiB/s [2024-11-15T10:13:40.138Z] 10239.25 IOPS, 40.00 MiB/s [2024-11-15T10:13:41.078Z] 10851.60 IOPS, 42.39 MiB/s [2024-11-15T10:13:42.018Z] 11261.50 IOPS, 43.99 MiB/s [2024-11-15T10:13:42.961Z] 11565.57 IOPS, 45.18 MiB/s [2024-11-15T10:13:43.902Z] 11818.75 IOPS, 46.17 MiB/s [2024-11-15T10:13:44.848Z] 12032.00 IOPS, 47.00 MiB/s [2024-11-15T10:13:44.848Z] 12190.40 IOPS, 47.62 MiB/s 00:33:25.321 Latency(us) 00:33:25.321 [2024-11-15T10:13:44.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.321 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:25.321 Verification LBA range: start 0x0 length 0x4000 00:33:25.321 NVMe0n1 : 10.05 12225.27 47.75 0.00 0.00 83489.81 19660.80 80390.83 00:33:25.321 [2024-11-15T10:13:44.848Z] =================================================================================================================== 00:33:25.321 [2024-11-15T10:13:44.848Z] Total : 12225.27 47.75 0.00 0.00 83489.81 19660.80 80390.83 00:33:25.321 { 00:33:25.321 "results": [ 00:33:25.321 { 00:33:25.321 "job": "NVMe0n1", 00:33:25.321 "core_mask": "0x1", 00:33:25.321 "workload": "verify", 00:33:25.321 "status": "finished", 00:33:25.321 "verify_range": { 00:33:25.321 "start": 0, 00:33:25.321 "length": 16384 00:33:25.321 }, 00:33:25.321 "queue_depth": 1024, 00:33:25.321 "io_size": 4096, 00:33:25.321 "runtime": 10.054584, 00:33:25.321 "iops": 12225.26958847825, 00:33:25.321 "mibps": 47.75495932999316, 00:33:25.321 "io_failed": 0, 00:33:25.321 "io_timeout": 0, 00:33:25.321 "avg_latency_us": 83489.80632866904, 00:33:25.321 "min_latency_us": 19660.8, 00:33:25.321 "max_latency_us": 80390.82666666666 00:33:25.321 } 00:33:25.321 ], 
00:33:25.321 "core_count": 1 00:33:25.321 } 00:33:25.608 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 639616 00:33:25.608 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 639616 ']' 00:33:25.608 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 639616 00:33:25.608 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 639616 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 639616' 00:33:25.609 killing process with pid 639616 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 639616 00:33:25.609 Received shutdown signal, test time was about 10.000000 seconds 00:33:25.609 00:33:25.609 Latency(us) 00:33:25.609 [2024-11-15T10:13:45.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.609 [2024-11-15T10:13:45.136Z] =================================================================================================================== 00:33:25.609 [2024-11-15T10:13:45.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.609 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 639616 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.609 rmmod nvme_tcp 00:33:25.609 rmmod nvme_fabrics 00:33:25.609 rmmod nvme_keyring 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:25.609 11:13:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 639514 ']' 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 639514 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 639514 ']' 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 639514 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.609 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 639514 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 639514' 00:33:25.893 killing process with pid 639514 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 639514 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 639514 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.893 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.871 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.871 00:33:27.871 real 0m22.584s 00:33:27.871 user 0m24.760s 00:33:27.871 sys 0m7.572s 00:33:27.871 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:33:27.871 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.871 ************************************ 00:33:27.871 END TEST nvmf_queue_depth 00:33:27.871 ************************************ 00:33:28.133 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:28.133 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:28.133 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:28.133 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:28.133 ************************************ 00:33:28.134 START TEST nvmf_target_multipath 00:33:28.134 ************************************ 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:28.134 * Looking for test storage... 00:33:28.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:28.134 11:13:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.134 --rc genhtml_branch_coverage=1 00:33:28.134 --rc genhtml_function_coverage=1 00:33:28.134 --rc genhtml_legend=1 00:33:28.134 --rc geninfo_all_blocks=1 00:33:28.134 --rc geninfo_unexecuted_blocks=1 00:33:28.134 00:33:28.134 ' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.134 --rc genhtml_branch_coverage=1 00:33:28.134 --rc genhtml_function_coverage=1 00:33:28.134 --rc genhtml_legend=1 00:33:28.134 --rc geninfo_all_blocks=1 00:33:28.134 --rc geninfo_unexecuted_blocks=1 00:33:28.134 00:33:28.134 ' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.134 --rc genhtml_branch_coverage=1 00:33:28.134 --rc genhtml_function_coverage=1 00:33:28.134 --rc genhtml_legend=1 00:33:28.134 --rc geninfo_all_blocks=1 00:33:28.134 --rc 
geninfo_unexecuted_blocks=1 00:33:28.134 00:33:28.134 ' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.134 --rc genhtml_branch_coverage=1 00:33:28.134 --rc genhtml_function_coverage=1 00:33:28.134 --rc genhtml_legend=1 00:33:28.134 --rc geninfo_all_blocks=1 00:33:28.134 --rc geninfo_unexecuted_blocks=1 00:33:28.134 00:33:28.134 ' 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.134 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
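The '* Looking for test storage...' preamble and the lt/cmp_versions lines above are scripts/common.sh probing the installed lcov: both version strings are split on '.', '-' and ':' and compared field by field, so lt 1.15 2 succeeds and the branch/function-coverage LCOV options get exported for the child test. A condensed sketch of the comparison idiom (names shortened; the real helper is cmp_versions in scripts/common.sh):

  IFS=.-: read -ra ver1 <<< "1.15"    # -> (1 15)
  IFS=.-: read -ra ver2 <<< "2"       # -> (2)
  # walk the fields, treating a missing field as 0
  for ((v = 0; v < 2; v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "1.15 < 2"; break; }
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "1.15 > 2"; break; }
  done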
00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.396 11:13:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:28.396 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.397 11:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
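What follows is nvmftestinit discovering the two e810 ports and then carving one of them into a network namespace, so target and initiator can exchange real TCP traffic on a single host. Condensed from the trace further down (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are as in the trace; this is a sketch of the ip(8) sequence, not the literal helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP port 4420 through

The two pings that follow (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify the path in both directions before common.sh returns.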
00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.530 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.531 11:13:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:36.531 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:36.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.531 11:13:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:36.531 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:36.531 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.531 11:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:36.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:33:36.531 00:33:36.531 --- 10.0.0.2 ping statistics --- 00:33:36.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.531 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:33:36.531 00:33:36.531 --- 10.0.0.1 ping statistics --- 00:33:36.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.531 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.531 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:36.532 only one NIC for nvmf test 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.532 rmmod nvme_tcp 00:33:36.532 rmmod nvme_fabrics 00:33:36.532 rmmod nvme_keyring 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:36.532 11:13:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.532 11:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.913 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.913 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:37.913 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:37.914 11:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.914 00:33:37.914 real 0m9.854s 00:33:37.914 user 0m2.183s 00:33:37.914 sys 0m5.627s 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:37.914 ************************************ 00:33:37.914 END TEST nvmf_target_multipath 00:33:37.914 ************************************ 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:37.914 ************************************ 00:33:37.914 START TEST nvmf_zcopy 00:33:37.914 ************************************ 00:33:37.914 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:38.175 * Looking for test storage... 
00:33:38.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:38.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.175 --rc genhtml_branch_coverage=1 00:33:38.175 --rc genhtml_function_coverage=1 00:33:38.175 --rc genhtml_legend=1 00:33:38.175 --rc geninfo_all_blocks=1 00:33:38.175 --rc geninfo_unexecuted_blocks=1 00:33:38.175 00:33:38.175 ' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:38.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.175 --rc genhtml_branch_coverage=1 00:33:38.175 --rc genhtml_function_coverage=1 00:33:38.175 --rc genhtml_legend=1 00:33:38.175 --rc geninfo_all_blocks=1 00:33:38.175 --rc geninfo_unexecuted_blocks=1 00:33:38.175 00:33:38.175 ' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:38.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.175 --rc genhtml_branch_coverage=1 00:33:38.175 --rc genhtml_function_coverage=1 00:33:38.175 --rc genhtml_legend=1 00:33:38.175 --rc geninfo_all_blocks=1 00:33:38.175 --rc geninfo_unexecuted_blocks=1 00:33:38.175 00:33:38.175 ' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:38.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.175 --rc genhtml_branch_coverage=1 00:33:38.175 --rc genhtml_function_coverage=1 00:33:38.175 --rc genhtml_legend=1 00:33:38.175 --rc geninfo_all_blocks=1 00:33:38.175 --rc geninfo_unexecuted_blocks=1 00:33:38.175 00:33:38.175 ' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.175 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.176 11:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.176 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.315 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:46.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:46.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:46.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:46.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.315 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.316 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.316 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:33:46.316 00:33:46.316 --- 10.0.0.2 ping statistics --- 00:33:46.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.316 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:33:46.316 00:33:46.316 --- 10.0.0.1 ping statistics --- 00:33:46.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.316 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=650210 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 650210 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 650210 ']' 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:46.316 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.316 [2024-11-15 11:14:05.147859] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:46.316 [2024-11-15 11:14:05.148967] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:33:46.316 [2024-11-15 11:14:05.149015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.316 [2024-11-15 11:14:05.247664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.316 [2024-11-15 11:14:05.297849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.316 [2024-11-15 11:14:05.297898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.316 [2024-11-15 11:14:05.297912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.316 [2024-11-15 11:14:05.297919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.316 [2024-11-15 11:14:05.297925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.316 [2024-11-15 11:14:05.298691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.316 [2024-11-15 11:14:05.376196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:46.316 [2024-11-15 11:14:05.376500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
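Condensing the nvmf_tcp_init sequence traced above into one sketch (same device names and addresses the run chose; cvl_0_0/cvl_0_1 are simply the two e810 ports this host exposed): one port of the NIC is moved into a private network namespace so the SPDK target and the kernel-side initiator can exchange real TCP traffic on a single machine.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so the teardown grep can find it
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment \
      --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  # The target itself then runs inside the namespace, pinned to core 1 (-m 0x2),
  # in interrupt mode, exactly as the nvmfappstart line above shows:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2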
00:33:46.578 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:46.578 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:46.578 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.578 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:46.578 11:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 [2024-11-15 11:14:06.007535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 [2024-11-15 11:14:06.035866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:46.578 11:14:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 malloc0 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.578 { 00:33:46.578 "params": { 00:33:46.578 "name": "Nvme$subsystem", 00:33:46.578 "trtype": "$TEST_TRANSPORT", 00:33:46.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.578 "adrfam": "ipv4", 00:33:46.578 "trsvcid": "$NVMF_PORT", 00:33:46.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.578 "hdgst": ${hdgst:-false}, 00:33:46.578 "ddgst": ${ddgst:-false} 00:33:46.578 }, 00:33:46.578 "method": "bdev_nvme_attach_controller" 00:33:46.578 } 00:33:46.578 EOF 00:33:46.578 )") 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:46.578 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:46.839 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:46.839 11:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:46.839 "params": { 00:33:46.839 "name": "Nvme1", 00:33:46.839 "trtype": "tcp", 00:33:46.839 "traddr": "10.0.0.2", 00:33:46.839 "adrfam": "ipv4", 00:33:46.839 "trsvcid": "4420", 00:33:46.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.839 "hdgst": false, 00:33:46.839 "ddgst": false 00:33:46.839 }, 00:33:46.839 "method": "bdev_nvme_attach_controller" 00:33:46.839 }' 00:33:46.839 [2024-11-15 11:14:06.154481] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
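For reference, the rpc_cmd calls traced above are thin wrappers around scripts/rpc.py aimed at the running nvmf_tgt; the same zero-copy target bring-up as a standalone sketch (flags copied from the trace, rpc.py path assumed relative to the spdk checkout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # -c 0: no in-capsule data, --zcopy: zero-copy enabled
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                               # -a: allow any host, -m: up to 10 namespaces
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects from the root namespace using the JSON emitted by gen_nvmf_target_json above, which fills in traddr 10.0.0.2 and trsvcid 4420 for the bdev_nvme_attach_controller call.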
00:33:46.839 [2024-11-15 11:14:06.154545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650348 ]
[2024-11-15 11:14:06.246394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 11:14:06.299667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:47.410 Running I/O for 10 seconds...
00:33:49.293 6392.00 IOPS, 49.94 MiB/s
[2024-11-15T10:14:09.760Z] 6442.50 IOPS, 50.33 MiB/s
[2024-11-15T10:14:10.699Z] 6502.00 IOPS, 50.80 MiB/s
[2024-11-15T10:14:12.083Z] 6639.50 IOPS, 51.87 MiB/s
[2024-11-15T10:14:12.654Z] 7250.20 IOPS, 56.64 MiB/s
[2024-11-15T10:14:14.038Z] 7653.83 IOPS, 59.80 MiB/s
[2024-11-15T10:14:14.978Z] 7939.57 IOPS, 62.03 MiB/s
[2024-11-15T10:14:15.919Z] 8154.38 IOPS, 63.71 MiB/s
[2024-11-15T10:14:16.860Z] 8322.33 IOPS, 65.02 MiB/s
[2024-11-15T10:14:16.860Z] 8455.90 IOPS, 66.06 MiB/s
00:33:57.333 Latency(us)
00:33:57.333 [2024-11-15T10:14:16.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:57.333 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:57.333 Verification LBA range: start 0x0 length 0x1000
00:33:57.333 Nvme1n1 : 10.01 8460.71 66.10 0.00 0.00 15082.68 2553.17 27415.89
00:33:57.333 [2024-11-15T10:14:16.860Z] ===================================================================================================================
00:33:57.333 [2024-11-15T10:14:16.860Z] Total : 8460.71 66.10 0.00 0.00 15082.68 2553.17 27415.89
00:33:57.333 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=652370
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:57.334 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:57.334 {
00:33:57.334 "params": {
00:33:57.334 "name": "Nvme$subsystem",
00:33:57.334 "trtype": "$TEST_TRANSPORT",
00:33:57.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:57.334 "adrfam": "ipv4",
00:33:57.334 "trsvcid": "$NVMF_PORT",
00:33:57.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:57.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:57.334 "hdgst": ${hdgst:-false},
00:33:57.334 "ddgst": ${ddgst:-false}
00:33:57.334 },
00:33:57.334 "method": "bdev_nvme_attach_controller"
00:33:57.334 }
00:33:57.334 EOF
00:33:57.334 )")
[2024-11-15 11:14:16.771095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already
in use 00:33:57.334 [2024-11-15 11:14:16.771122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:57.334 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:57.334 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:57.334 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:57.334 "params": { 00:33:57.334 "name": "Nvme1", 00:33:57.334 "trtype": "tcp", 00:33:57.334 "traddr": "10.0.0.2", 00:33:57.334 "adrfam": "ipv4", 00:33:57.334 "trsvcid": "4420", 00:33:57.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:57.334 "hdgst": false, 00:33:57.334 "ddgst": false 00:33:57.334 }, 00:33:57.334 "method": "bdev_nvme_attach_controller" 00:33:57.334 }' 00:33:57.334 [2024-11-15 11:14:16.783067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.783075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.795065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.795073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.807065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.807073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.816339] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
00:33:57.334 [2024-11-15 11:14:16.816387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652370 ] 00:33:57.334 [2024-11-15 11:14:16.819065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.819072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.831065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.831072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.843066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.843078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.334 [2024-11-15 11:14:16.855065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.334 [2024-11-15 11:14:16.855072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.867066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.867074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.879065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.879072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.891065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.891072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.899659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.596 [2024-11-15 11:14:16.903064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.903072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.915065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.915074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.927065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.927075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.929500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.596 [2024-11-15 11:14:16.939065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.939073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.951071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.951081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.963067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:57.596 [2024-11-15 11:14:16.963080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.975066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.975076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.987065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.987072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:16.999072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:16.999089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.011067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.011077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.023067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.023077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.035067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.035077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.047064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.047072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.059064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.059076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.071065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.071073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.083065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.083074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.095065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.095072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.107064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.107072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.596 [2024-11-15 11:14:17.119064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.596 [2024-11-15 11:14:17.119072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.131065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.131075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 
11:14:17.143064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.143071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.155065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.155072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.167065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.167074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.217016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.217030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.227067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.227078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 Running I/O for 5 seconds... 00:33:57.857 [2024-11-15 11:14:17.242007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.242024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.255077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.255093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.267986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.268002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.282439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.282454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.295718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.295732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.310258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.310273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.323227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.323242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.336310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.336327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.351140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.351155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.364211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:57.857 [2024-11-15 11:14:17.364225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.857 [2024-11-15 11:14:17.378926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.857 [2024-11-15 11:14:17.378941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same two-line error pair, subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace", repeats continuously at roughly 13-15 ms intervals, timestamps advancing from 00:33:58.117 [2024-11-15 11:14:17.391942] through 00:34:01.769 [2024-11-15 11:14:21.279506]; the only other output in this span is the periodic bdevperf throughput samples below)
19039.00 IOPS, 148.74 MiB/s [2024-11-15T10:14:18.428Z]
19089.50 IOPS, 149.14 MiB/s [2024-11-15T10:14:19.471Z]
19080.67 IOPS, 149.07 MiB/s [2024-11-15T10:14:20.253Z]
19087.00 IOPS, 149.12 MiB/s [2024-11-15T10:14:21.296Z]
00:34:01.769 [2024-11-15 11:14:21.294165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.769 [2024-11-15 11:14:21.294179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.307403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.307417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.322386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.322400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.335579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.335593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.350187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.350202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.363239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.363253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.375707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.375721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.389788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.389802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.402821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.402836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.416097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.416118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.430162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.430176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.442974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.442988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.455709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.455722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.470137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.470151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.483385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.483398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.498140] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.498155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.510959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.510973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.523642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.523656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.538135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.538149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.029 [2024-11-15 11:14:21.550982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.029 [2024-11-15 11:14:21.550997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.564384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.564398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.578903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.578918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.591916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.591930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.606293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.606308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.619399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.619413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.633946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.633960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.646820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.646834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.659820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.659835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.674225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.674243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.687323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.687338] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.699523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.699537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.714034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.714048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.726993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.727007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.739929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.739944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.753797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.753812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.766767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.766782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.780299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.780314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.794415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.794430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.290 [2024-11-15 11:14:21.807189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.290 [2024-11-15 11:14:21.807204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.820047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.820062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.834374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.834388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.847526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.847540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.862770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.862786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.875822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.875837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.890378] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.890393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.903610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.903625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.917997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.918012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.931126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.931145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.944237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.944252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.958317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.958331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.971667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.971682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.986365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.986380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:21.999287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:21.999302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:22.012100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:22.012114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:22.026956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:22.026971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:22.039841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:22.039855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:22.054680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:22.054694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.551 [2024-11-15 11:14:22.067908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.551 [2024-11-15 11:14:22.067922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.081818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.081835] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.094815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.094830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.107701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.107716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.122063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.122078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.135548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.135569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.150184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.150198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.163192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.163206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.813 [2024-11-15 11:14:22.176269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.813 [2024-11-15 11:14:22.176283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 [2024-11-15 11:14:22.189930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.814 [2024-11-15 11:14:22.189948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 [2024-11-15 11:14:22.203049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.814 [2024-11-15 11:14:22.203064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 [2024-11-15 11:14:22.215294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.814 [2024-11-15 11:14:22.215309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 [2024-11-15 11:14:22.228214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.814 [2024-11-15 11:14:22.228228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 19087.60 IOPS, 149.12 MiB/s [2024-11-15T10:14:22.341Z] [2024-11-15 11:14:22.242037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.814 [2024-11-15 11:14:22.242052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.814 00:34:02.814 Latency(us) 00:34:02.814 [2024-11-15T10:14:22.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.814 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:02.814 Nvme1n1 : 5.01 19089.20 149.13 0.00 0.00 6699.38 2662.40 11195.73 00:34:02.814 [2024-11-15T10:14:22.341Z] =================================================================================================================== 00:34:02.814 
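The summary rows above are internally consistent; a quick back-of-envelope sketch, using only the Job-line parameters printed above (8192-byte I/Os at queue depth 128):

    awk 'BEGIN {
        iops = 19089.20; io_size = 8192; qd = 128; avg_lat_us = 6699.38
        # 19089.20 IOPS x 8192 B per I/O = 149.13 MiB/s -- matches the MiB/s column
        printf "throughput  : %.2f MiB/s\n", iops * io_size / (1024 * 1024)
        # IOPS implied by queue depth over average latency: 128 / 6699.38 us = 19106,
        # within 0.1% of the measured 19089.20, i.e. the queue stayed saturated
        printf "implied IOPS: %.0f\n", qd / (avg_lat_us / 1e6)
    }'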
00:34:02.814 [2024-11-15 11:14:22.251072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:02.814 [2024-11-15 11:14:22.251087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... eight more repetitions of the same pair at roughly 12 ms intervals, the last at 11:14:22.347065/11:14:22.347074 ...]
00:34:03.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (652370) - No such process
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 652370
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:03.075 delay0
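The bdev_delay_create call traced above stacks a latency-injecting bdev on top of malloc0, so the abort run that follows always has commands in flight to cancel. A minimal sketch of the same call through SPDK's rpc.py (an illustration assuming a running target that already has a malloc0 bdev; flag glosses per the delay bdev RPC, latencies in microseconds):

    # -b base bdev to wrap, -d name of the new delay bdev
    # -r / -t: average / tail (p99) read latency to inject
    # -w / -n: average / tail (p99) write latency to inject
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
            -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1000000 us, ~1 s per I/O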
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:03.075 11:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:03.075 [2024-11-15 11:14:22.470147] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:09.662 [2024-11-15 11:14:28.669519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145ef60 is same with the state(6) to be set
00:34:09.662 Initializing NVMe Controllers
00:34:09.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:09.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:09.662 Initialization complete. Launching workers.
00:34:09.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 244
00:34:09.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 522, failed to submit 42
00:34:09.662 success 331, unsuccessful 191, failed 0
00:34:09.662 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:09.662 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:09.662 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:09.662 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:09.663 rmmod nvme_tcp
00:34:09.663 rmmod nvme_fabrics
00:34:09.663 rmmod nvme_keyring
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 650210 ']'
00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 650210
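For reference, the abort example invocation from the trace above, with its -r transport ID string unpacked (a sketch; the command is copied from the trace, and the field glosses follow the usual SPDK transport ID syntax):

    # trtype:tcp       NVMe-oF transport type
    # adrfam:IPv4      address family of traddr
    # traddr:10.0.0.2  target listener address
    # trsvcid:4420     NVMe/TCP service port
    # ns:1             namespace to exercise -- the delay0 bdev added as NSID 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
    # With ~1 s of injected latency at queue depth 64, most submitted commands
    # are still pending when an abort arrives, which is why 522 aborts were
    # submitted and 331 of them succeeded in the NS:/CTRLR: summary above.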
common/autotest_common.sh@952 -- # '[' -z 650210 ']' 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 650210 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 650210 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 650210' 00:34:09.663 killing process with pid 650210 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 650210 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 650210 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.663 11:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.576 00:34:11.576 real 0m33.636s 00:34:11.576 user 0m42.904s 00:34:11.576 sys 0m12.103s 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.576 ************************************ 00:34:11.576 END TEST nvmf_zcopy 00:34:11.576 ************************************ 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:11.576 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:11.576 ************************************ 00:34:11.576 START TEST nvmf_nmic 00:34:11.839 ************************************ 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:11.839 * Looking for test storage... 00:34:11.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.839 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.840 --rc genhtml_branch_coverage=1 00:34:11.840 --rc genhtml_function_coverage=1 00:34:11.840 --rc genhtml_legend=1 00:34:11.840 --rc geninfo_all_blocks=1 00:34:11.840 --rc geninfo_unexecuted_blocks=1 00:34:11.840 00:34:11.840 ' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.840 --rc genhtml_branch_coverage=1 00:34:11.840 --rc genhtml_function_coverage=1 00:34:11.840 --rc genhtml_legend=1 00:34:11.840 --rc geninfo_all_blocks=1 00:34:11.840 --rc geninfo_unexecuted_blocks=1 00:34:11.840 00:34:11.840 ' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.840 --rc genhtml_branch_coverage=1 00:34:11.840 --rc genhtml_function_coverage=1 00:34:11.840 --rc genhtml_legend=1 00:34:11.840 --rc geninfo_all_blocks=1 00:34:11.840 --rc geninfo_unexecuted_blocks=1 00:34:11.840 00:34:11.840 ' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.840 --rc genhtml_branch_coverage=1 00:34:11.840 --rc genhtml_function_coverage=1 00:34:11.840 --rc genhtml_legend=1 00:34:11.840 --rc geninfo_all_blocks=1 00:34:11.840 --rc geninfo_unexecuted_blocks=1 00:34:11.840 00:34:11.840 ' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.840 11:14:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:11.840 11:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.000 11:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.000 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:20.001 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.001 11:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:20.001 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:20.001 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.001 
11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:20.001 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
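The ip/ip-netns records above split the two E810 ports across namespaces: cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24) while cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), so NVMe/TCP traffic crosses a real link even though both ends run on one host; the link-up, iptables and ping records that follow complete the bring-up. A minimal standalone sketch of the same topology (interface and namespace names taken from this run; it assumes the two ports are physically cabled back-to-back):

    # start clean, then give the target port its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface, then check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1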
00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:34:20.001 00:34:20.001 --- 10.0.0.2 ping statistics --- 00:34:20.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.001 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:20.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:34:20.001 00:34:20.001 --- 10.0.0.1 ping statistics --- 00:34:20.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.001 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.001 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=658904 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 658904 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 658904 ']' 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.002 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.002 [2024-11-15 11:14:38.913518] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:20.002 [2024-11-15 11:14:38.914684] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:34:20.002 [2024-11-15 11:14:38.914736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.002 [2024-11-15 11:14:39.014021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.002 [2024-11-15 11:14:39.068029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.002 [2024-11-15 11:14:39.068083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.002 [2024-11-15 11:14:39.068092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.002 [2024-11-15 11:14:39.068099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.002 [2024-11-15 11:14:39.068105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.002 [2024-11-15 11:14:39.070208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.002 [2024-11-15 11:14:39.070369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.002 [2024-11-15 11:14:39.070505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.002 [2024-11-15 11:14:39.070505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.002 [2024-11-15 11:14:39.149282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:20.002 [2024-11-15 11:14:39.150353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:20.002 [2024-11-15 11:14:39.150580] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:20.002 [2024-11-15 11:14:39.151167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:20.002 [2024-11-15 11:14:39.151208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.263 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.263 [2024-11-15 11:14:39.775361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 Malloc0 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
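The rpc_cmd records above drive the target's provisioning; in the harness rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (a Unix socket, so it is reachable from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk). Run outside the harness, the same sequence would look roughly like this (paths from this workspace; transport flags exactly as the harness passed them):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport; -u sets an 8192-byte IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420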
00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 [2024-11-15 11:14:39.871748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:20.525 test case1: single bdev can't be used in multiple subsystems 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 [2024-11-15 11:14:39.906986] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:20.525 [2024-11-15 11:14:39.907012] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:20.525 [2024-11-15 11:14:39.907021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.525 request: 00:34:20.525 { 00:34:20.525 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:20.525 "namespace": { 00:34:20.525 "bdev_name": "Malloc0", 00:34:20.525 "no_auto_visible": false, 00:34:20.525 "no_metadata": false 00:34:20.525 }, 00:34:20.525 "method": "nvmf_subsystem_add_ns", 00:34:20.525 "req_id": 1 00:34:20.525 } 00:34:20.525 Got JSON-RPC error response 00:34:20.525 response: 00:34:20.525 { 00:34:20.525 "code": -32602, 00:34:20.525 "message": "Invalid parameters" 00:34:20.525 } 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:20.525 11:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:20.525 Adding namespace failed - expected result. 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:20.525 test case2: host connect to nvmf target in multiple paths 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:20.525 [2024-11-15 11:14:39.919136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.525 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:21.097 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:21.358 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:21.358 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:21.358 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:21.358 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:21.358 11:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:23.270 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:23.270 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:23.270 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:23.552 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:23.552 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:23.552 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:23.552 11:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:23.552 [global] 00:34:23.552 thread=1 00:34:23.552 invalidate=1 
00:34:23.552 rw=write 00:34:23.552 time_based=1 00:34:23.552 runtime=1 00:34:23.552 ioengine=libaio 00:34:23.552 direct=1 00:34:23.552 bs=4096 00:34:23.552 iodepth=1 00:34:23.552 norandommap=0 00:34:23.552 numjobs=1 00:34:23.552 00:34:23.552 verify_dump=1 00:34:23.552 verify_backlog=512 00:34:23.552 verify_state_save=0 00:34:23.552 do_verify=1 00:34:23.552 verify=crc32c-intel 00:34:23.552 [job0] 00:34:23.552 filename=/dev/nvme0n1 00:34:23.552 Could not set queue depth (nvme0n1) 00:34:23.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.821 fio-3.35 00:34:23.821 Starting 1 thread 00:34:25.208 00:34:25.208 job0: (groupid=0, jobs=1): err= 0: pid=659777: Fri Nov 15 11:14:44 2024 00:34:25.208 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:34:25.208 slat (nsec): min=9312, max=28021, avg=26032.28, stdev=4186.75 00:34:25.208 clat (usec): min=851, max=42996, avg=39479.78, stdev=9660.27 00:34:25.208 lat (usec): min=860, max=43023, avg=39505.81, stdev=9664.43 00:34:25.208 clat percentiles (usec): 00:34:25.208 | 1.00th=[ 848], 5.00th=[ 848], 10.00th=[41157], 20.00th=[41157], 00:34:25.208 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:25.208 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:34:25.208 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:25.208 | 99.99th=[43254] 00:34:25.208 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:34:25.208 slat (usec): min=9, max=26726, avg=82.77, stdev=1179.86 00:34:25.208 clat (usec): min=221, max=761, avg=540.97, stdev=102.66 00:34:25.208 lat (usec): min=232, max=27453, avg=623.74, stdev=1192.87 00:34:25.208 clat percentiles (usec): 00:34:25.208 | 1.00th=[ 289], 5.00th=[ 343], 10.00th=[ 400], 20.00th=[ 461], 00:34:25.208 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 570], 00:34:25.208 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 693], 00:34:25.208 | 99.00th=[ 734], 99.50th=[ 742], 99.90th=[ 758], 99.95th=[ 758], 00:34:25.208 | 99.99th=[ 758] 00:34:25.208 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:25.208 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:25.208 lat (usec) : 250=0.75%, 500=27.55%, 750=68.11%, 1000=0.38% 00:34:25.208 lat (msec) : 50=3.21% 00:34:25.208 cpu : usr=0.87%, sys=2.03%, ctx=535, majf=0, minf=1 00:34:25.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.208 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.208 00:34:25.208 Run status group 0 (all jobs): 00:34:25.208 READ: bw=69.5KiB/s (71.2kB/s), 69.5KiB/s-69.5KiB/s (71.2kB/s-71.2kB/s), io=72.0KiB (73.7kB), run=1036-1036msec 00:34:25.208 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:34:25.208 00:34:25.208 Disk stats (read/write): 00:34:25.208 nvme0n1: ios=39/512, merge=0/0, ticks=1498/216, in_queue=1714, util=98.60% 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:25.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:25.208 11:14:44 
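The job file and results above come from scripts/fio-wrapper, which expands its "-p nvmf -i 4096 -d 1 -t write -r 1 -v" arguments into the libaio job dumped before the run (bs=4096, iodepth=1, rw=write, runtime=1, crc32c-intel verification against /dev/nvme0n1). A roughly equivalent direct invocation, assuming the connected namespace still appears as /dev/nvme0n1, would be:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The disconnect reporting 2 controller(s) is expected here: test case2 connected the same host twice, once per listener (ports 4420 and 4421), to exercise multiple paths to one subsystem, so tearing down nqn.2016-06.io.spdk:cnode1 removes both controllers at once.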
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:25.208 rmmod nvme_tcp 00:34:25.208 rmmod nvme_fabrics 00:34:25.208 rmmod nvme_keyring 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 658904 ']' 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 658904 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 658904 ']' 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 658904 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658904 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 658904' 00:34:25.208 killing process with pid 658904 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 658904 00:34:25.208 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 658904 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.468 11:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.382 00:34:27.382 real 0m15.742s 00:34:27.382 user 0m35.712s 00:34:27.382 sys 0m7.277s 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.382 ************************************ 00:34:27.382 END TEST nvmf_nmic 00:34:27.382 ************************************ 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:27.382 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:27.645 ************************************ 00:34:27.645 START TEST nvmf_fio_target 00:34:27.645 ************************************ 00:34:27.645 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:27.645 * Looking for test storage... 
00:34:27.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:27.645 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.646 --rc genhtml_branch_coverage=1 00:34:27.646 --rc genhtml_function_coverage=1 00:34:27.646 --rc genhtml_legend=1 00:34:27.646 --rc geninfo_all_blocks=1 00:34:27.646 --rc geninfo_unexecuted_blocks=1 00:34:27.646 00:34:27.646 ' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.646 --rc genhtml_branch_coverage=1 00:34:27.646 --rc genhtml_function_coverage=1 00:34:27.646 --rc genhtml_legend=1 00:34:27.646 --rc geninfo_all_blocks=1 00:34:27.646 --rc geninfo_unexecuted_blocks=1 00:34:27.646 00:34:27.646 ' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.646 --rc genhtml_branch_coverage=1 00:34:27.646 --rc genhtml_function_coverage=1 00:34:27.646 --rc genhtml_legend=1 00:34:27.646 --rc geninfo_all_blocks=1 00:34:27.646 --rc geninfo_unexecuted_blocks=1 00:34:27.646 00:34:27.646 ' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.646 --rc genhtml_branch_coverage=1 00:34:27.646 --rc genhtml_function_coverage=1 00:34:27.646 --rc genhtml_legend=1 00:34:27.646 --rc geninfo_all_blocks=1 00:34:27.646 --rc geninfo_unexecuted_blocks=1 00:34:27.646 
00:34:27.646 ' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.646 11:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.795 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.796 11:14:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.796 11:14:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:35.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:35.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:35.796 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:35.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.796 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:34:35.797 00:34:35.797 --- 10.0.0.2 ping statistics --- 00:34:35.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.797 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:35.797 00:34:35.797 --- 10.0.0.1 ping statistics --- 00:34:35.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.797 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=664204 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 664204 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 664204 ']' 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:35.797 11:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.797 [2024-11-15 11:14:54.715727] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.797 [2024-11-15 11:14:54.717194] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:34:35.797 [2024-11-15 11:14:54.717261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.797 [2024-11-15 11:14:54.817207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.797 [2024-11-15 11:14:54.870225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.797 [2024-11-15 11:14:54.870279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.797 [2024-11-15 11:14:54.870287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.797 [2024-11-15 11:14:54.870296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.797 [2024-11-15 11:14:54.870302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.797 [2024-11-15 11:14:54.872682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.797 [2024-11-15 11:14:54.872859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.797 [2024-11-15 11:14:54.873017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.797 [2024-11-15 11:14:54.873017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.797 [2024-11-15 11:14:54.952232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.797 [2024-11-15 11:14:54.953233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.797 [2024-11-15 11:14:54.953477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:35.797 [2024-11-15 11:14:54.954016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:35.797 [2024-11-15 11:14:54.954025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
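The nvmf_tcp_init sequence traced above moves one port of the e810 pair into a private network namespace so that target and initiator traffic crosses the physical link instead of staying on the host stack. A minimal sketch of the equivalent setup, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing reported by this run (a condensed restatement of the commands logged above, not a verbatim excerpt — the real script also tags its iptables rule with an SPDK_NVMF comment):

    # netns-based test topology used by nvmf_tcp_init
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                    # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF, as logged above), so the listener it later opens on 10.0.0.2:4420 is reachable only across the moved interface.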
00:34:36.059 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:36.059 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:36.060 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.060 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:36.060 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.060 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.060 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:36.321 [2024-11-15 11:14:55.730031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.321 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.582 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:36.582 11:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.843 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:36.843 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.104 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:37.104 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.104 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:37.104 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:37.365 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.626 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:37.626 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.888 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:37.888 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.888 11:14:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:37.888 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:38.149 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:38.409 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:38.409 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.670 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:38.670 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:38.670 11:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.932 [2024-11-15 11:14:58.313988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.932 11:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:39.194 11:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:39.456 11:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:39.718 11:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:41.634 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:41.634 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:41.634 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:41.980 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:41.980 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:41.980 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:41.980 11:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:41.980 [global] 00:34:41.980 thread=1 00:34:41.980 invalidate=1 00:34:41.980 rw=write 00:34:41.980 time_based=1 00:34:41.980 runtime=1 00:34:41.980 ioengine=libaio 00:34:41.980 direct=1 00:34:41.980 bs=4096 00:34:41.980 iodepth=1 00:34:41.980 norandommap=0 00:34:41.980 numjobs=1 00:34:41.980 00:34:41.980 verify_dump=1 00:34:41.980 verify_backlog=512 00:34:41.980 verify_state_save=0 00:34:41.980 do_verify=1 00:34:41.980 verify=crc32c-intel 00:34:41.980 [job0] 00:34:41.980 filename=/dev/nvme0n1 00:34:41.980 [job1] 00:34:41.980 filename=/dev/nvme0n2 00:34:41.980 [job2] 00:34:41.980 filename=/dev/nvme0n3 00:34:41.980 [job3] 00:34:41.980 filename=/dev/nvme0n4 00:34:41.980 Could not set queue depth (nvme0n1) 00:34:41.980 Could not set queue depth (nvme0n2) 00:34:41.980 Could not set queue depth (nvme0n3) 00:34:41.980 Could not set queue depth (nvme0n4) 00:34:42.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.242 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.242 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.242 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.242 fio-3.35 00:34:42.242 Starting 4 threads 00:34:43.629 00:34:43.629 job0: (groupid=0, jobs=1): err= 0: pid=665790: Fri Nov 15 11:15:02 2024 00:34:43.629 read: IOPS=16, BW=65.8KiB/s (67.3kB/s)(68.0KiB/1034msec) 00:34:43.629 slat (nsec): min=26152, max=26943, avg=26620.88, stdev=229.24 00:34:43.629 clat (usec): min=1090, max=42009, avg=39528.13, stdev=9905.56 00:34:43.629 lat (usec): min=1117, max=42035, avg=39554.75, stdev=9905.48 00:34:43.629 clat percentiles (usec): 00:34:43.629 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:34:43.629 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:43.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:43.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.629 | 99.99th=[42206] 00:34:43.629 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:34:43.629 slat (nsec): min=10062, max=71398, avg=32408.42, stdev=9599.15 00:34:43.629 clat (usec): min=282, max=1059, avg=660.24, stdev=138.66 00:34:43.629 lat (usec): min=295, max=1112, avg=692.65, stdev=142.52 00:34:43.629 clat percentiles (usec): 00:34:43.629 | 1.00th=[ 347], 5.00th=[ 416], 10.00th=[ 486], 20.00th=[ 523], 00:34:43.629 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 701], 00:34:43.629 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 881], 
00:34:43.629 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:43.629 | 99.99th=[ 1057] 00:34:43.629 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.629 lat (usec) : 500=13.04%, 750=54.82%, 1000=28.17% 00:34:43.629 lat (msec) : 2=0.95%, 50=3.02% 00:34:43.629 cpu : usr=0.97%, sys=1.36%, ctx=532, majf=0, minf=1 00:34:43.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.629 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.629 job1: (groupid=0, jobs=1): err= 0: pid=665794: Fri Nov 15 11:15:02 2024 00:34:43.629 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:34:43.629 slat (nsec): min=24211, max=25398, avg=24950.35, stdev=266.10 00:34:43.629 clat (usec): min=1203, max=42068, avg=39532.02, stdev=9878.02 00:34:43.629 lat (usec): min=1228, max=42093, avg=39556.97, stdev=9878.00 00:34:43.629 clat percentiles (usec): 00:34:43.629 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681], 00:34:43.629 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:43.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:43.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.629 | 99.99th=[42206] 00:34:43.629 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:43.629 slat (nsec): min=9320, max=63983, avg=29984.16, stdev=8630.85 00:34:43.629 clat (usec): min=207, max=1158, avg=623.86, stdev=147.54 00:34:43.629 lat (usec): min=218, max=1191, avg=653.84, stdev=150.40 00:34:43.629 clat percentiles (usec): 00:34:43.629 | 1.00th=[ 281], 5.00th=[ 375], 10.00th=[ 433], 20.00th=[ 506], 00:34:43.629 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:34:43.629 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 865], 00:34:43.629 | 99.00th=[ 996], 99.50th=[ 1037], 99.90th=[ 1156], 99.95th=[ 1156], 00:34:43.629 | 99.99th=[ 1156] 00:34:43.629 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.629 lat (usec) : 250=0.38%, 500=18.34%, 750=62.76%, 1000=14.37% 00:34:43.629 lat (msec) : 2=1.13%, 50=3.02% 00:34:43.629 cpu : usr=1.09%, sys=1.19%, ctx=529, majf=0, minf=2 00:34:43.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.629 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.629 job2: (groupid=0, jobs=1): err= 0: pid=665808: Fri Nov 15 11:15:02 2024 00:34:43.629 read: IOPS=16, BW=66.2KiB/s (67.8kB/s)(68.0KiB/1027msec) 00:34:43.629 slat (nsec): min=26693, max=31268, avg=27253.88, stdev=1054.55 00:34:43.629 clat (usec): min=1061, max=42011, avg=39494.44, stdev=9906.19 00:34:43.629 lat (usec): min=1092, max=42038, avg=39521.69, stdev=9905.16 00:34:43.629 clat percentiles (usec): 00:34:43.629 | 1.00th=[ 1057], 5.00th=[ 1057], 
10.00th=[41157], 20.00th=[41681], 00:34:43.629 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:43.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:43.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.629 | 99.99th=[42206] 00:34:43.629 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:34:43.630 slat (nsec): min=10202, max=55886, avg=33047.78, stdev=8934.86 00:34:43.630 clat (usec): min=172, max=1033, avg=646.68, stdev=172.24 00:34:43.630 lat (usec): min=191, max=1068, avg=679.73, stdev=174.80 00:34:43.630 clat percentiles (usec): 00:34:43.630 | 1.00th=[ 196], 5.00th=[ 330], 10.00th=[ 400], 20.00th=[ 494], 00:34:43.630 | 30.00th=[ 570], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 709], 00:34:43.630 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 848], 95.00th=[ 906], 00:34:43.630 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:43.630 | 99.99th=[ 1037] 00:34:43.630 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.630 lat (usec) : 250=2.08%, 500=18.15%, 750=47.64%, 1000=28.17% 00:34:43.630 lat (msec) : 2=0.95%, 50=3.02% 00:34:43.630 cpu : usr=0.58%, sys=1.85%, ctx=530, majf=0, minf=1 00:34:43.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.630 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.630 job3: (groupid=0, jobs=1): err= 0: pid=665819: Fri Nov 15 11:15:02 2024 00:34:43.630 read: IOPS=661, BW=2645KiB/s (2709kB/s)(2648KiB/1001msec) 00:34:43.630 slat (nsec): min=7317, max=60342, avg=25614.29, stdev=6696.14 00:34:43.630 clat (usec): min=362, max=1226, avg=756.95, stdev=129.07 00:34:43.630 lat (usec): min=370, max=1253, avg=782.57, stdev=130.19 00:34:43.630 clat percentiles (usec): 00:34:43.630 | 1.00th=[ 424], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 635], 00:34:43.630 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 775], 60.00th=[ 807], 00:34:43.630 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 906], 95.00th=[ 947], 00:34:43.630 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1221], 99.95th=[ 1221], 00:34:43.630 | 99.99th=[ 1221] 00:34:43.630 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:43.630 slat (nsec): min=9746, max=71428, avg=31684.76, stdev=10308.87 00:34:43.630 clat (usec): min=119, max=973, avg=423.44, stdev=143.10 00:34:43.630 lat (usec): min=129, max=1009, avg=455.12, stdev=147.03 00:34:43.630 clat percentiles (usec): 00:34:43.630 | 1.00th=[ 133], 5.00th=[ 217], 10.00th=[ 260], 20.00th=[ 302], 00:34:43.630 | 30.00th=[ 326], 40.00th=[ 375], 50.00th=[ 408], 60.00th=[ 437], 00:34:43.630 | 70.00th=[ 506], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 685], 00:34:43.630 | 99.00th=[ 775], 99.50th=[ 832], 99.90th=[ 865], 99.95th=[ 971], 00:34:43.630 | 99.99th=[ 971] 00:34:43.630 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.630 lat (usec) : 250=4.98%, 500=38.08%, 750=34.10%, 1000=21.95% 00:34:43.630 lat (msec) : 2=0.89% 00:34:43.630 cpu : usr=2.60%, sys=4.90%, ctx=1690, majf=0, minf=1 
00:34:43.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.630 issued rwts: total=662,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.630 00:34:43.630 Run status group 0 (all jobs): 00:34:43.630 READ: bw=2758KiB/s (2824kB/s), 65.8KiB/s-2645KiB/s (67.3kB/s-2709kB/s), io=2852KiB (2920kB), run=1001-1034msec 00:34:43.630 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-4092KiB/s (2028kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1034msec 00:34:43.630 00:34:43.630 Disk stats (read/write): 00:34:43.630 nvme0n1: ios=71/512, merge=0/0, ticks=1175/320, in_queue=1495, util=86.93% 00:34:43.630 nvme0n2: ios=61/512, merge=0/0, ticks=524/311, in_queue=835, util=84.38% 00:34:43.630 nvme0n3: ios=75/512, merge=0/0, ticks=1090/307, in_queue=1397, util=96.40% 00:34:43.630 nvme0n4: ios=540/761, merge=0/0, ticks=1218/315, in_queue=1533, util=98.62% 00:34:43.630 11:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:43.630 [global] 00:34:43.630 thread=1 00:34:43.630 invalidate=1 00:34:43.630 rw=randwrite 00:34:43.630 time_based=1 00:34:43.630 runtime=1 00:34:43.630 ioengine=libaio 00:34:43.630 direct=1 00:34:43.630 bs=4096 00:34:43.630 iodepth=1 00:34:43.630 norandommap=0 00:34:43.630 numjobs=1 00:34:43.630 00:34:43.630 verify_dump=1 00:34:43.630 verify_backlog=512 00:34:43.630 verify_state_save=0 00:34:43.630 do_verify=1 00:34:43.630 verify=crc32c-intel 00:34:43.630 [job0] 00:34:43.630 filename=/dev/nvme0n1 00:34:43.630 [job1] 00:34:43.630 filename=/dev/nvme0n2 00:34:43.630 [job2] 00:34:43.630 filename=/dev/nvme0n3 00:34:43.630 [job3] 00:34:43.630 filename=/dev/nvme0n4 00:34:43.630 Could not set queue depth (nvme0n1) 00:34:43.630 Could not set queue depth (nvme0n2) 00:34:43.630 Could not set queue depth (nvme0n3) 00:34:43.630 Could not set queue depth (nvme0n4) 00:34:43.891 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.891 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.891 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.891 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.891 fio-3.35 00:34:43.891 Starting 4 threads 00:34:45.276 00:34:45.276 job0: (groupid=0, jobs=1): err= 0: pid=666331: Fri Nov 15 11:15:04 2024 00:34:45.276 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:45.276 slat (nsec): min=6723, max=45893, avg=26252.28, stdev=4151.08 00:34:45.276 clat (usec): min=537, max=1316, avg=1025.15, stdev=121.30 00:34:45.276 lat (usec): min=564, max=1342, avg=1051.40, stdev=121.99 00:34:45.276 clat percentiles (usec): 00:34:45.276 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 947], 00:34:45.276 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:34:45.276 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:45.276 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:45.276 | 99.99th=[ 1319] 00:34:45.276 write: IOPS=695, 
BW=2781KiB/s (2848kB/s)(2784KiB/1001msec); 0 zone resets 00:34:45.276 slat (nsec): min=9716, max=68474, avg=30107.82, stdev=9071.00 00:34:45.276 clat (usec): min=223, max=1308, avg=619.38, stdev=126.73 00:34:45.276 lat (usec): min=257, max=1342, avg=649.48, stdev=130.01 00:34:45.276 clat percentiles (usec): 00:34:45.276 | 1.00th=[ 293], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 523], 00:34:45.276 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:34:45.276 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816], 00:34:45.276 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:45.276 | 99.99th=[ 1303] 00:34:45.276 bw ( KiB/s): min= 4096, max= 4096, per=35.30%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.276 lat (usec) : 250=0.17%, 500=8.86%, 750=42.38%, 1000=19.21% 00:34:45.276 lat (msec) : 2=29.39% 00:34:45.276 cpu : usr=2.00%, sys=3.40%, ctx=1209, majf=0, minf=1 00:34:45.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.276 issued rwts: total=512,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.276 job1: (groupid=0, jobs=1): err= 0: pid=666332: Fri Nov 15 11:15:04 2024 00:34:45.276 read: IOPS=621, BW=2486KiB/s (2545kB/s)(2488KiB/1001msec) 00:34:45.276 slat (nsec): min=6351, max=64194, avg=23176.10, stdev=8013.17 00:34:45.276 clat (usec): min=262, max=1002, avg=700.06, stdev=103.99 00:34:45.276 lat (usec): min=271, max=1028, avg=723.23, stdev=106.59 00:34:45.276 clat percentiles (usec): 00:34:45.276 | 1.00th=[ 400], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 619], 00:34:45.276 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:34:45.276 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 848], 00:34:45.276 | 99.00th=[ 914], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:45.276 | 99.99th=[ 1004] 00:34:45.276 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:45.276 slat (nsec): min=8716, max=52271, avg=29526.89, stdev=8359.64 00:34:45.276 clat (usec): min=130, max=811, avg=495.83, stdev=114.17 00:34:45.276 lat (usec): min=139, max=842, avg=525.36, stdev=117.61 00:34:45.276 clat percentiles (usec): 00:34:45.276 | 1.00th=[ 235], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 400], 00:34:45.276 | 30.00th=[ 441], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 529], 00:34:45.276 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 676], 00:34:45.276 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 783], 99.95th=[ 807], 00:34:45.276 | 99.99th=[ 807] 00:34:45.276 bw ( KiB/s): min= 4096, max= 4096, per=35.30%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.276 lat (usec) : 250=0.85%, 500=32.50%, 750=53.16%, 1000=13.43% 00:34:45.276 lat (msec) : 2=0.06% 00:34:45.276 cpu : usr=3.90%, sys=5.40%, ctx=1646, majf=0, minf=1 00:34:45.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.276 issued rwts: total=622,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.276 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:34:45.276 job2: (groupid=0, jobs=1): err= 0: pid=666334: Fri Nov 15 11:15:04 2024 00:34:45.276 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:34:45.276 slat (nsec): min=28169, max=29304, avg=28577.41, stdev=248.87 00:34:45.276 clat (usec): min=40803, max=42066, avg=41315.10, stdev=458.16 00:34:45.276 lat (usec): min=40831, max=42095, avg=41343.67, stdev=458.14 00:34:45.276 clat percentiles (usec): 00:34:45.276 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:45.276 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:45.276 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:45.276 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.276 | 99.99th=[42206] 00:34:45.276 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:45.276 slat (nsec): min=9478, max=72515, avg=34231.26, stdev=8617.70 00:34:45.276 clat (usec): min=162, max=938, avg=557.08, stdev=141.95 00:34:45.277 lat (usec): min=172, max=973, avg=591.31, stdev=144.25 00:34:45.277 clat percentiles (usec): 00:34:45.277 | 1.00th=[ 231], 5.00th=[ 326], 10.00th=[ 371], 20.00th=[ 441], 00:34:45.277 | 30.00th=[ 486], 40.00th=[ 515], 50.00th=[ 562], 60.00th=[ 594], 00:34:45.277 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 799], 00:34:45.277 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 938], 00:34:45.277 | 99.99th=[ 938] 00:34:45.277 bw ( KiB/s): min= 4096, max= 4096, per=35.30%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.277 lat (usec) : 250=1.51%, 500=33.65%, 750=53.12%, 1000=8.51% 00:34:45.277 lat (msec) : 50=3.21% 00:34:45.277 cpu : usr=1.29%, sys=2.08%, ctx=530, majf=0, minf=1 00:34:45.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.277 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.277 job3: (groupid=0, jobs=1): err= 0: pid=666335: Fri Nov 15 11:15:04 2024 00:34:45.277 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:45.277 slat (nsec): min=7932, max=46509, avg=25967.34, stdev=3451.26 00:34:45.277 clat (usec): min=524, max=1456, avg=1060.32, stdev=148.10 00:34:45.277 lat (usec): min=551, max=1482, avg=1086.28, stdev=148.22 00:34:45.277 clat percentiles (usec): 00:34:45.277 | 1.00th=[ 676], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 947], 00:34:45.277 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1106], 00:34:45.277 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1221], 95.00th=[ 1319], 00:34:45.277 | 99.00th=[ 1385], 99.50th=[ 1434], 99.90th=[ 1450], 99.95th=[ 1450], 00:34:45.277 | 99.99th=[ 1450] 00:34:45.277 write: IOPS=700, BW=2801KiB/s (2868kB/s)(2804KiB/1001msec); 0 zone resets 00:34:45.277 slat (nsec): min=9376, max=53521, avg=29812.65, stdev=8269.26 00:34:45.277 clat (usec): min=137, max=1002, avg=589.64, stdev=142.54 00:34:45.277 lat (usec): min=150, max=1034, avg=619.45, stdev=144.29 00:34:45.277 clat percentiles (usec): 00:34:45.277 | 1.00th=[ 258], 5.00th=[ 338], 10.00th=[ 404], 20.00th=[ 469], 00:34:45.277 | 30.00th=[ 506], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:34:45.277 | 70.00th=[ 676], 
80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 807], 00:34:45.277 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:45.277 | 99.99th=[ 1004] 00:34:45.277 bw ( KiB/s): min= 4096, max= 4096, per=35.30%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.277 lat (usec) : 250=0.49%, 500=15.83%, 750=34.71%, 1000=19.04% 00:34:45.277 lat (msec) : 2=29.93% 00:34:45.277 cpu : usr=1.80%, sys=3.60%, ctx=1213, majf=0, minf=1 00:34:45.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.277 issued rwts: total=512,701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.277 00:34:45.277 Run status group 0 (all jobs): 00:34:45.277 READ: bw=6580KiB/s (6738kB/s), 67.3KiB/s-2486KiB/s (68.9kB/s-2545kB/s), io=6652KiB (6812kB), run=1001-1011msec 00:34:45.277 WRITE: bw=11.3MiB/s (11.9MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=11.5MiB (12.0MB), run=1001-1011msec 00:34:45.277 00:34:45.277 Disk stats (read/write): 00:34:45.277 nvme0n1: ios=507/512, merge=0/0, ticks=678/310, in_queue=988, util=98.20% 00:34:45.277 nvme0n2: ios=561/869, merge=0/0, ticks=380/338, in_queue=718, util=89.49% 00:34:45.277 nvme0n3: ios=60/512, merge=0/0, ticks=959/200, in_queue=1159, util=96.09% 00:34:45.277 nvme0n4: ios=531/512, merge=0/0, ticks=607/277, in_queue=884, util=96.15% 00:34:45.277 11:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:45.277 [global] 00:34:45.277 thread=1 00:34:45.277 invalidate=1 00:34:45.277 rw=write 00:34:45.277 time_based=1 00:34:45.277 runtime=1 00:34:45.277 ioengine=libaio 00:34:45.277 direct=1 00:34:45.277 bs=4096 00:34:45.277 iodepth=128 00:34:45.277 norandommap=0 00:34:45.277 numjobs=1 00:34:45.277 00:34:45.277 verify_dump=1 00:34:45.277 verify_backlog=512 00:34:45.277 verify_state_save=0 00:34:45.277 do_verify=1 00:34:45.277 verify=crc32c-intel 00:34:45.277 [job0] 00:34:45.277 filename=/dev/nvme0n1 00:34:45.277 [job1] 00:34:45.277 filename=/dev/nvme0n2 00:34:45.277 [job2] 00:34:45.277 filename=/dev/nvme0n3 00:34:45.277 [job3] 00:34:45.277 filename=/dev/nvme0n4 00:34:45.277 Could not set queue depth (nvme0n1) 00:34:45.277 Could not set queue depth (nvme0n2) 00:34:45.277 Could not set queue depth (nvme0n3) 00:34:45.277 Could not set queue depth (nvme0n4) 00:34:45.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.537 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.537 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.537 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:45.537 fio-3.35 00:34:45.537 Starting 4 threads 00:34:46.921 00:34:46.921 job0: (groupid=0, jobs=1): err= 0: pid=666855: Fri Nov 15 11:15:06 2024 00:34:46.921 read: IOPS=8058, BW=31.5MiB/s (33.0MB/s)(31.7MiB/1006msec) 00:34:46.921 slat (nsec): min=884, max=8280.1k, avg=61176.05, stdev=481867.80 00:34:46.921 clat (usec): min=2101, max=20734, avg=8247.93, stdev=2779.67 
00:34:46.921 lat (usec): min=2103, max=21479, avg=8309.11, stdev=2812.46 00:34:46.921 clat percentiles (usec): 00:34:46.921 | 1.00th=[ 3294], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 5866], 00:34:46.921 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 7570], 60.00th=[ 8848], 00:34:46.921 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[12256], 95.00th=[13566], 00:34:46.921 | 99.00th=[15008], 99.50th=[15926], 99.90th=[17433], 99.95th=[17433], 00:34:46.921 | 99.99th=[20841] 00:34:46.921 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:34:46.921 slat (nsec): min=1608, max=8505.9k, avg=53095.26, stdev=381556.50 00:34:46.921 clat (usec): min=858, max=25161, avg=7412.19, stdev=2988.26 00:34:46.921 lat (usec): min=869, max=25163, avg=7465.28, stdev=3010.20 00:34:46.921 clat percentiles (usec): 00:34:46.921 | 1.00th=[ 2180], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 5407], 00:34:46.921 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 7439], 00:34:46.921 | 70.00th=[ 8291], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[12649], 00:34:46.921 | 99.00th=[19006], 99.50th=[21103], 99.90th=[21890], 99.95th=[22676], 00:34:46.921 | 99.99th=[25035] 00:34:46.921 bw ( KiB/s): min=29824, max=35712, per=34.19%, avg=32768.00, stdev=4163.44, samples=2 00:34:46.921 iops : min= 7456, max= 8928, avg=8192.00, stdev=1040.86, samples=2 00:34:46.921 lat (usec) : 1000=0.04% 00:34:46.921 lat (msec) : 2=0.36%, 4=3.71%, 10=75.89%, 20=19.66%, 50=0.34% 00:34:46.921 cpu : usr=5.17%, sys=8.26%, ctx=570, majf=0, minf=1 00:34:46.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:46.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.921 issued rwts: total=8107,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.921 job1: (groupid=0, jobs=1): err= 0: pid=666856: Fri Nov 15 11:15:06 2024 00:34:46.921 read: IOPS=5047, BW=19.7MiB/s (20.7MB/s)(20.6MiB/1047msec) 00:34:46.921 slat (nsec): min=939, max=12762k, avg=94344.25, stdev=744708.81 00:34:46.921 clat (usec): min=5076, max=54152, avg=13404.40, stdev=8583.79 00:34:46.921 lat (usec): min=5084, max=56775, avg=13498.74, stdev=8633.10 00:34:46.921 clat percentiles (usec): 00:34:46.921 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7504], 00:34:46.921 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11207], 00:34:46.921 | 70.00th=[14353], 80.00th=[19268], 90.00th=[22938], 95.00th=[30802], 00:34:46.921 | 99.00th=[51643], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:34:46.921 | 99.99th=[54264] 00:34:46.921 write: IOPS=5379, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1047msec); 0 zone resets 00:34:46.921 slat (nsec): min=1631, max=15114k, avg=81955.24, stdev=667392.77 00:34:46.921 clat (usec): min=670, max=39283, avg=11006.36, stdev=5725.29 00:34:46.921 lat (usec): min=679, max=39315, avg=11088.31, stdev=5785.11 00:34:46.921 clat percentiles (usec): 00:34:46.921 | 1.00th=[ 4490], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 7373], 00:34:46.921 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:46.921 | 70.00th=[10421], 80.00th=[14353], 90.00th=[19792], 95.00th=[25822], 00:34:46.921 | 99.00th=[29492], 99.50th=[29754], 99.90th=[32113], 99.95th=[38011], 00:34:46.921 | 99.99th=[39060] 00:34:46.921 bw ( KiB/s): min=16384, max=28672, per=23.50%, avg=22528.00, stdev=8688.93, samples=2 00:34:46.922 
iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:34:46.922 lat (usec) : 750=0.06% 00:34:46.922 lat (msec) : 2=0.02%, 4=0.05%, 10=55.28%, 20=32.43%, 50=11.58% 00:34:46.922 lat (msec) : 100=0.58% 00:34:46.922 cpu : usr=4.02%, sys=6.02%, ctx=243, majf=0, minf=2 00:34:46.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:46.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.922 issued rwts: total=5285,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.922 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.922 job2: (groupid=0, jobs=1): err= 0: pid=666857: Fri Nov 15 11:15:06 2024 00:34:46.922 read: IOPS=6605, BW=25.8MiB/s (27.1MB/s)(26.1MiB/1011msec) 00:34:46.922 slat (nsec): min=979, max=7827.1k, avg=67741.43, stdev=525916.53 00:34:46.922 clat (usec): min=3262, max=17798, avg=9207.71, stdev=2293.70 00:34:46.922 lat (usec): min=3266, max=18778, avg=9275.45, stdev=2330.30 00:34:46.922 clat percentiles (usec): 00:34:46.922 | 1.00th=[ 5014], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7439], 00:34:46.922 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:34:46.922 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[12649], 95.00th=[13566], 00:34:46.922 | 99.00th=[15795], 99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:34:46.922 | 99.99th=[17695] 00:34:46.922 write: IOPS=7090, BW=27.7MiB/s (29.0MB/s)(28.0MiB/1011msec); 0 zone resets 00:34:46.922 slat (nsec): min=1712, max=8627.4k, avg=69172.53, stdev=492186.14 00:34:46.922 clat (usec): min=474, max=45848, avg=9284.52, stdev=6151.61 00:34:46.922 lat (usec): min=830, max=45858, avg=9353.69, stdev=6193.30 00:34:46.922 clat percentiles (usec): 00:34:46.922 | 1.00th=[ 3163], 5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 6063], 00:34:46.922 | 30.00th=[ 6652], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8356], 00:34:46.922 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[12256], 95.00th=[26608], 00:34:46.922 | 99.00th=[38011], 99.50th=[39584], 99.90th=[45351], 99.95th=[45876], 00:34:46.922 | 99.99th=[45876] 00:34:46.922 bw ( KiB/s): min=25912, max=30584, per=29.47%, avg=28248.00, stdev=3303.60, samples=2 00:34:46.922 iops : min= 6478, max= 7646, avg=7062.00, stdev=825.90, samples=2 00:34:46.922 lat (usec) : 500=0.01%, 1000=0.03% 00:34:46.922 lat (msec) : 2=0.22%, 4=0.85%, 10=73.76%, 20=22.01%, 50=3.11% 00:34:46.922 cpu : usr=6.04%, sys=6.04%, ctx=414, majf=0, minf=1 00:34:46.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:46.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.922 issued rwts: total=6678,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.922 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.922 job3: (groupid=0, jobs=1): err= 0: pid=666858: Fri Nov 15 11:15:06 2024 00:34:46.922 read: IOPS=3813, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec) 00:34:46.922 slat (nsec): min=923, max=17423k, avg=116404.53, stdev=810163.80 00:34:46.922 clat (usec): min=1934, max=57128, avg=14399.61, stdev=7983.44 00:34:46.922 lat (usec): min=6392, max=61293, avg=14516.02, stdev=8055.00 00:34:46.922 clat percentiles (usec): 00:34:46.922 | 1.00th=[ 6783], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 8848], 00:34:46.922 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[13304], 00:34:46.922 | 
70.00th=[15926], 80.00th=[20841], 90.00th=[26870], 95.00th=[28705], 00:34:46.922 | 99.00th=[38536], 99.50th=[45351], 99.90th=[56886], 99.95th=[56886], 00:34:46.922 | 99.99th=[56886] 00:34:46.922 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:34:46.922 slat (nsec): min=1566, max=13625k, avg=130890.37, stdev=716775.35 00:34:46.922 clat (usec): min=1284, max=88657, avg=17668.97, stdev=16439.33 00:34:46.922 lat (usec): min=1296, max=88667, avg=17799.86, stdev=16561.03 00:34:46.922 clat percentiles (usec): 00:34:46.922 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8160], 00:34:46.922 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[13304], 00:34:46.922 | 70.00th=[20317], 80.00th=[24511], 90.00th=[31589], 95.00th=[54264], 00:34:46.922 | 99.00th=[87557], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:34:46.922 | 99.99th=[88605] 00:34:46.922 bw ( KiB/s): min=16384, max=16384, per=17.09%, avg=16384.00, stdev= 0.00, samples=2 00:34:46.922 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:34:46.922 lat (msec) : 2=0.04%, 10=50.58%, 20=22.55%, 50=23.87%, 100=2.96% 00:34:46.922 cpu : usr=2.69%, sys=3.38%, ctx=488, majf=0, minf=1 00:34:46.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:46.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.922 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.922 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.922 00:34:46.922 Run status group 0 (all jobs): 00:34:46.922 READ: bw=89.2MiB/s (93.5MB/s), 14.9MiB/s-31.5MiB/s (15.6MB/s-33.0MB/s), io=93.4MiB (97.9MB), run=1006-1047msec 00:34:46.922 WRITE: bw=93.6MiB/s (98.1MB/s), 15.9MiB/s-31.8MiB/s (16.7MB/s-33.4MB/s), io=98.0MiB (103MB), run=1006-1047msec 00:34:46.922 00:34:46.922 Disk stats (read/write): 00:34:46.922 nvme0n1: ios=6165/6423, merge=0/0, ticks=43138/39804, in_queue=82942, util=94.49% 00:34:46.922 nvme0n2: ios=4547/4608, merge=0/0, ticks=32507/27629, in_queue=60136, util=89.50% 00:34:46.922 nvme0n3: ios=6196/6433, merge=0/0, ticks=53639/47784, in_queue=101423, util=94.41% 00:34:46.922 nvme0n4: ios=3641/3735, merge=0/0, ticks=18652/22564, in_queue=41216, util=96.26% 00:34:46.922 11:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:46.922 [global] 00:34:46.922 thread=1 00:34:46.922 invalidate=1 00:34:46.922 rw=randwrite 00:34:46.922 time_based=1 00:34:46.922 runtime=1 00:34:46.922 ioengine=libaio 00:34:46.922 direct=1 00:34:46.922 bs=4096 00:34:46.922 iodepth=128 00:34:46.922 norandommap=0 00:34:46.922 numjobs=1 00:34:46.922 00:34:46.922 verify_dump=1 00:34:46.922 verify_backlog=512 00:34:46.922 verify_state_save=0 00:34:46.922 do_verify=1 00:34:46.922 verify=crc32c-intel 00:34:46.922 [job0] 00:34:46.922 filename=/dev/nvme0n1 00:34:46.922 [job1] 00:34:46.922 filename=/dev/nvme0n2 00:34:46.922 [job2] 00:34:46.922 filename=/dev/nvme0n3 00:34:46.922 [job3] 00:34:46.922 filename=/dev/nvme0n4 00:34:46.922 Could not set queue depth (nvme0n1) 00:34:46.922 Could not set queue depth (nvme0n2) 00:34:46.922 Could not set queue depth (nvme0n3) 00:34:46.922 Could not set queue depth (nvme0n4) 00:34:47.181 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:34:47.182 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.182 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.182 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.182 fio-3.35 00:34:47.182 Starting 4 threads 00:34:48.565 00:34:48.565 job0: (groupid=0, jobs=1): err= 0: pid=667380: Fri Nov 15 11:15:07 2024 00:34:48.565 read: IOPS=6246, BW=24.4MiB/s (25.6MB/s)(25.5MiB/1045msec) 00:34:48.565 slat (nsec): min=967, max=26389k, avg=75226.53, stdev=726164.35 00:34:48.565 clat (usec): min=2354, max=64851, avg=10958.12, stdev=9323.12 00:34:48.565 lat (usec): min=2361, max=64892, avg=11033.35, stdev=9386.03 00:34:48.565 clat percentiles (usec): 00:34:48.565 | 1.00th=[ 2999], 5.00th=[ 3621], 10.00th=[ 4424], 20.00th=[ 5407], 00:34:48.565 | 30.00th=[ 6390], 40.00th=[ 7177], 50.00th=[ 7767], 60.00th=[ 8848], 00:34:48.565 | 70.00th=[ 9765], 80.00th=[13304], 90.00th=[24249], 95.00th=[30016], 00:34:48.565 | 99.00th=[49546], 99.50th=[52167], 99.90th=[60556], 99.95th=[60556], 00:34:48.565 | 99.99th=[64750] 00:34:48.565 write: IOPS=6369, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1045msec); 0 zone resets 00:34:48.565 slat (nsec): min=1560, max=11278k, avg=59543.26, stdev=454831.22 00:34:48.565 clat (usec): min=777, max=59823, avg=9174.17, stdev=8296.83 00:34:48.565 lat (usec): min=805, max=60362, avg=9233.71, stdev=8334.55 00:34:48.565 clat percentiles (usec): 00:34:48.565 | 1.00th=[ 1958], 5.00th=[ 3261], 10.00th=[ 3884], 20.00th=[ 4621], 00:34:48.565 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 6456], 60.00th=[ 7898], 00:34:48.565 | 70.00th=[ 8356], 80.00th=[10814], 90.00th=[19530], 95.00th=[23725], 00:34:48.565 | 99.00th=[57410], 99.50th=[58459], 99.90th=[59507], 99.95th=[60031], 00:34:48.565 | 99.99th=[60031] 00:34:48.565 bw ( KiB/s): min=20480, max=32768, per=33.96%, avg=26624.00, stdev=8688.93, samples=2 00:34:48.565 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:34:48.565 lat (usec) : 1000=0.10% 00:34:48.565 lat (msec) : 2=0.43%, 4=8.65%, 10=64.34%, 20=15.65%, 50=9.90% 00:34:48.565 lat (msec) : 100=0.93% 00:34:48.565 cpu : usr=4.60%, sys=6.80%, ctx=415, majf=0, minf=1 00:34:48.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.565 issued rwts: total=6528,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.566 job1: (groupid=0, jobs=1): err= 0: pid=667382: Fri Nov 15 11:15:07 2024 00:34:48.566 read: IOPS=3869, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1004msec) 00:34:48.566 slat (nsec): min=1013, max=22258k, avg=119287.74, stdev=930594.63 00:34:48.566 clat (usec): min=1522, max=51966, avg=15150.95, stdev=9024.65 00:34:48.566 lat (usec): min=1669, max=51973, avg=15270.24, stdev=9091.04 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 3130], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6849], 00:34:48.566 | 30.00th=[ 7832], 40.00th=[ 8848], 50.00th=[15008], 60.00th=[17171], 00:34:48.566 | 70.00th=[19006], 80.00th=[21627], 90.00th=[27657], 95.00th=[33424], 00:34:48.566 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[51119], 00:34:48.566 | 99.99th=[52167] 00:34:48.566 
write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:34:48.566 slat (nsec): min=1647, max=13825k, avg=124589.11, stdev=712986.66 00:34:48.566 clat (usec): min=3262, max=62764, avg=16585.94, stdev=12820.66 00:34:48.566 lat (usec): min=3290, max=62774, avg=16710.52, stdev=12910.80 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6980], 00:34:48.566 | 30.00th=[ 7439], 40.00th=[ 9503], 50.00th=[12256], 60.00th=[15533], 00:34:48.566 | 70.00th=[18482], 80.00th=[22938], 90.00th=[35914], 95.00th=[45351], 00:34:48.566 | 99.00th=[61080], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:34:48.566 | 99.99th=[62653] 00:34:48.566 bw ( KiB/s): min=16384, max=16384, per=20.90%, avg=16384.00, stdev= 0.00, samples=2 00:34:48.566 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:34:48.566 lat (msec) : 2=0.21%, 4=1.08%, 10=40.48%, 20=33.04%, 50=23.18% 00:34:48.566 lat (msec) : 100=2.00% 00:34:48.566 cpu : usr=3.09%, sys=4.59%, ctx=359, majf=0, minf=1 00:34:48.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:48.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.566 issued rwts: total=3885,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.566 job2: (groupid=0, jobs=1): err= 0: pid=667383: Fri Nov 15 11:15:07 2024 00:34:48.566 read: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(16.4MiB/1045msec) 00:34:48.566 slat (nsec): min=996, max=12939k, avg=97550.52, stdev=709799.10 00:34:48.566 clat (usec): min=1517, max=74065, avg=12852.39, stdev=10818.92 00:34:48.566 lat (usec): min=1525, max=74073, avg=12949.94, stdev=10875.56 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 2073], 5.00th=[ 2507], 10.00th=[ 3392], 20.00th=[ 6390], 00:34:48.566 | 30.00th=[ 8356], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[11469], 00:34:48.566 | 70.00th=[12256], 80.00th=[16188], 90.00th=[22676], 95.00th=[31589], 00:34:48.566 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:34:48.566 | 99.99th=[73925] 00:34:48.566 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:34:48.566 slat (nsec): min=1592, max=13025k, avg=105764.05, stdev=655333.21 00:34:48.566 clat (usec): min=349, max=86084, avg=15104.44, stdev=11552.34 00:34:48.566 lat (usec): min=361, max=86086, avg=15210.20, stdev=11614.99 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 1090], 5.00th=[ 3654], 10.00th=[ 5014], 20.00th=[ 6587], 00:34:48.566 | 30.00th=[ 7570], 40.00th=[ 8848], 50.00th=[11469], 60.00th=[14484], 00:34:48.566 | 70.00th=[19006], 80.00th=[22676], 90.00th=[28705], 95.00th=[35390], 00:34:48.566 | 99.00th=[57934], 99.50th=[73925], 99.90th=[84411], 99.95th=[86508], 00:34:48.566 | 99.99th=[86508] 00:34:48.566 bw ( KiB/s): min=19968, max=20848, per=26.03%, avg=20408.00, stdev=622.25, samples=2 00:34:48.566 iops : min= 4992, max= 5212, avg=5102.00, stdev=155.56, samples=2 00:34:48.566 lat (usec) : 500=0.09%, 750=0.10%, 1000=0.21% 00:34:48.566 lat (msec) : 2=1.63%, 4=7.35%, 10=33.94%, 20=35.18%, 50=19.08% 00:34:48.566 lat (msec) : 100=2.43% 00:34:48.566 cpu : usr=2.20%, sys=5.46%, ctx=515, majf=0, minf=2 00:34:48.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:48.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:48.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.566 issued rwts: total=4206,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.566 job3: (groupid=0, jobs=1): err= 0: pid=667385: Fri Nov 15 11:15:07 2024 00:34:48.566 read: IOPS=4081, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:34:48.566 slat (nsec): min=1010, max=11297k, avg=99434.69, stdev=701197.90 00:34:48.566 clat (usec): min=3348, max=37622, avg=11465.74, stdev=4983.40 00:34:48.566 lat (usec): min=3357, max=37634, avg=11565.17, stdev=5036.43 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7439], 00:34:48.566 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[10159], 60.00th=[11863], 00:34:48.566 | 70.00th=[12780], 80.00th=[14353], 90.00th=[17695], 95.00th=[22152], 00:34:48.566 | 99.00th=[29230], 99.50th=[32900], 99.90th=[37487], 99.95th=[37487], 00:34:48.566 | 99.99th=[37487] 00:34:48.566 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:34:48.566 slat (nsec): min=1714, max=7926.4k, avg=123461.81, stdev=620704.32 00:34:48.566 clat (usec): min=1198, max=74253, avg=17447.16, stdev=13763.10 00:34:48.566 lat (usec): min=1207, max=74261, avg=17570.63, stdev=13854.12 00:34:48.566 clat percentiles (usec): 00:34:48.566 | 1.00th=[ 3982], 5.00th=[ 5407], 10.00th=[ 6718], 20.00th=[ 7767], 00:34:48.566 | 30.00th=[ 8848], 40.00th=[10814], 50.00th=[12911], 60.00th=[14484], 00:34:48.566 | 70.00th=[16319], 80.00th=[24511], 90.00th=[37487], 95.00th=[45876], 00:34:48.566 | 99.00th=[69731], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:34:48.566 | 99.99th=[73925] 00:34:48.566 bw ( KiB/s): min=16904, max=18992, per=22.90%, avg=17948.00, stdev=1476.44, samples=2 00:34:48.566 iops : min= 4226, max= 4748, avg=4487.00, stdev=369.11, samples=2 00:34:48.566 lat (msec) : 2=0.10%, 4=0.64%, 10=41.15%, 20=41.07%, 50=15.24% 00:34:48.566 lat (msec) : 100=1.80% 00:34:48.566 cpu : usr=3.39%, sys=4.68%, ctx=422, majf=0, minf=1 00:34:48.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:48.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.566 issued rwts: total=4102,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.566 00:34:48.566 Run status group 0 (all jobs): 00:34:48.566 READ: bw=70.0MiB/s (73.4MB/s), 15.1MiB/s-24.4MiB/s (15.8MB/s-25.6MB/s), io=73.1MiB (76.7MB), run=1004-1045msec 00:34:48.566 WRITE: bw=76.6MiB/s (80.3MB/s), 15.9MiB/s-24.9MiB/s (16.7MB/s-26.1MB/s), io=80.0MiB (83.9MB), run=1004-1045msec 00:34:48.566 00:34:48.566 Disk stats (read/write): 00:34:48.566 nvme0n1: ios=4951/5120, merge=0/0, ticks=40370/34511, in_queue=74881, util=80.56% 00:34:48.566 nvme0n2: ios=2648/3072, merge=0/0, ticks=17722/22633, in_queue=40355, util=90.99% 00:34:48.566 nvme0n3: ios=3648/4546, merge=0/0, ticks=25116/39371, in_queue=64487, util=95.44% 00:34:48.566 nvme0n4: ios=3633/3775, merge=0/0, ticks=37574/62046, in_queue=99620, util=97.96% 00:34:48.566 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:48.566 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=667715 00:34:48.566 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3
00:34:48.566 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:48.566 [global]
00:34:48.566 thread=1
00:34:48.566 invalidate=1
00:34:48.566 rw=read
00:34:48.566 time_based=1
00:34:48.566 runtime=10
00:34:48.566 ioengine=libaio
00:34:48.566 direct=1
00:34:48.566 bs=4096
00:34:48.566 iodepth=1
00:34:48.566 norandommap=1
00:34:48.566 numjobs=1
00:34:48.566
00:34:48.566 [job0]
00:34:48.566 filename=/dev/nvme0n1
00:34:48.566 [job1]
00:34:48.566 filename=/dev/nvme0n2
00:34:48.566 [job2]
00:34:48.566 filename=/dev/nvme0n3
00:34:48.566 [job3]
00:34:48.566 filename=/dev/nvme0n4
00:34:48.848 Could not set queue depth (nvme0n1)
00:34:48.848 Could not set queue depth (nvme0n2)
00:34:48.848 Could not set queue depth (nvme0n3)
00:34:48.848 Could not set queue depth (nvme0n4)
00:34:49.110 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:49.110 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:49.110 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:49.110 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:49.110 fio-3.35
00:34:49.110 Starting 4 threads
00:34:51.654 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:51.915 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:51.915 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7065600, buflen=4096
00:34:51.915 fio: pid=667907, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:51.915 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:51.915 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:51.915 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7024640, buflen=4096
00:34:51.915 fio: pid=667906, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:52.177 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10129408, buflen=4096
00:34:52.177 fio: pid=667903, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:52.177 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:52.177 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:52.439 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:52.439 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:52.439 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13508608, buflen=4096 00:34:52.439 fio: pid=667905, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:52.439 00:34:52.439 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=667903: Fri Nov 15 11:15:11 2024 00:34:52.439 read: IOPS=835, BW=3340KiB/s (3420kB/s)(9892KiB/2962msec) 00:34:52.439 slat (usec): min=6, max=32583, avg=62.76, stdev=1023.54 00:34:52.439 clat (usec): min=145, max=42105, avg=1120.50, stdev=4856.35 00:34:52.439 lat (usec): min=152, max=42130, avg=1183.28, stdev=4960.31 00:34:52.439 clat percentiles (usec): 00:34:52.439 | 1.00th=[ 212], 5.00th=[ 318], 10.00th=[ 367], 20.00th=[ 433], 00:34:52.439 | 30.00th=[ 474], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 529], 00:34:52.439 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 857], 95.00th=[ 1004], 00:34:52.439 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:52.439 | 99.99th=[42206] 00:34:52.439 bw ( KiB/s): min= 112, max= 7440, per=26.20%, avg=3038.40, stdev=3899.94, samples=5 00:34:52.439 iops : min= 28, max= 1860, avg=759.60, stdev=974.99, samples=5 00:34:52.439 lat (usec) : 250=2.34%, 500=42.64%, 750=43.94%, 1000=5.66% 00:34:52.439 lat (msec) : 2=3.88%, 4=0.04%, 10=0.04%, 50=1.41% 00:34:52.439 cpu : usr=1.08%, sys=2.16%, ctx=2480, majf=0, minf=1 00:34:52.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.439 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=667905: Fri Nov 15 11:15:11 2024 00:34:52.439 read: IOPS=1038, BW=4152KiB/s (4252kB/s)(12.9MiB/3177msec) 00:34:52.439 slat (usec): min=6, max=14529, avg=34.16, stdev=364.77 00:34:52.439 clat (usec): min=195, max=41609, avg=916.93, stdev=3074.49 00:34:52.439 lat (usec): min=202, max=47895, avg=951.09, stdev=3122.01 00:34:52.439 clat percentiles (usec): 00:34:52.439 | 1.00th=[ 326], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 486], 00:34:52.439 | 30.00th=[ 515], 40.00th=[ 570], 50.00th=[ 660], 60.00th=[ 742], 00:34:52.439 | 70.00th=[ 816], 80.00th=[ 930], 90.00th=[ 971], 95.00th=[ 1020], 00:34:52.439 | 99.00th=[ 1156], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:52.439 | 99.99th=[41681] 00:34:52.439 bw ( KiB/s): min= 1738, max= 7768, per=37.48%, avg=4347.00, stdev=2175.91, samples=6 00:34:52.439 iops : min= 434, max= 1942, avg=1086.67, stdev=544.10, samples=6 00:34:52.439 lat (usec) : 250=0.15%, 500=24.49%, 750=36.10%, 1000=32.68% 00:34:52.439 lat (msec) : 2=5.94%, 4=0.03%, 50=0.58% 00:34:52.439 cpu : usr=1.39%, sys=2.93%, ctx=3302, majf=0, minf=2 00:34:52.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 issued rwts: total=3299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.439 job2: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=667906: Fri Nov 15 11:15:11 2024 00:34:52.439 read: IOPS=612, BW=2449KiB/s (2508kB/s)(6860KiB/2801msec) 00:34:52.439 slat (usec): min=6, max=15451, avg=41.10, stdev=511.75 00:34:52.439 clat (usec): min=325, max=41608, avg=1573.91, stdev=5709.91 00:34:52.439 lat (usec): min=351, max=41635, avg=1615.01, stdev=5730.03 00:34:52.439 clat percentiles (usec): 00:34:52.439 | 1.00th=[ 482], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 660], 00:34:52.439 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 783], 00:34:52.439 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 922], 95.00th=[ 988], 00:34:52.439 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:52.439 | 99.99th=[41681] 00:34:52.439 bw ( KiB/s): min= 520, max= 5000, per=22.14%, avg=2568.00, stdev=1856.85, samples=5 00:34:52.439 iops : min= 130, max= 1250, avg=642.00, stdev=464.21, samples=5 00:34:52.439 lat (usec) : 500=1.22%, 750=46.27%, 1000=48.08% 00:34:52.439 lat (msec) : 2=2.33%, 50=2.04% 00:34:52.439 cpu : usr=0.61%, sys=1.68%, ctx=1718, majf=0, minf=2 00:34:52.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.439 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=667907: Fri Nov 15 11:15:11 2024 00:34:52.439 read: IOPS=659, BW=2637KiB/s (2700kB/s)(6900KiB/2617msec) 00:34:52.439 slat (nsec): min=6268, max=62528, avg=26502.99, stdev=5530.21 00:34:52.439 clat (usec): min=278, max=41442, avg=1472.09, stdev=4895.61 00:34:52.439 lat (usec): min=289, max=41448, avg=1498.59, stdev=4895.84 00:34:52.439 clat percentiles (usec): 00:34:52.439 | 1.00th=[ 375], 5.00th=[ 537], 10.00th=[ 652], 20.00th=[ 750], 00:34:52.439 | 30.00th=[ 799], 40.00th=[ 865], 50.00th=[ 914], 60.00th=[ 955], 00:34:52.439 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1074], 00:34:52.439 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:52.439 | 99.99th=[41681] 00:34:52.439 bw ( KiB/s): min= 744, max= 4048, per=23.60%, avg=2737.60, stdev=1612.11, samples=5 00:34:52.439 iops : min= 186, max= 1012, avg=684.40, stdev=403.03, samples=5 00:34:52.439 lat (usec) : 500=3.65%, 750=17.21%, 1000=57.88% 00:34:52.439 lat (msec) : 2=19.70%, 50=1.51% 00:34:52.439 cpu : usr=0.65%, sys=2.87%, ctx=1726, majf=0, minf=1 00:34:52.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.439 issued rwts: total=1726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.439 00:34:52.439 Run status group 0 (all jobs): 00:34:52.439 READ: bw=11.3MiB/s (11.9MB/s), 2449KiB/s-4152KiB/s (2508kB/s-4252kB/s), io=36.0MiB (37.7MB), run=2617-3177msec 00:34:52.439 00:34:52.439 Disk stats (read/write): 00:34:52.439 nvme0n1: ios=2224/0, merge=0/0, ticks=2631/0, in_queue=2631, util=91.59% 00:34:52.439 nvme0n2: ios=3296/0, merge=0/0, ticks=2826/0, in_queue=2826, util=94.64% 00:34:52.439 nvme0n3: ios=1610/0, 
merge=0/0, ticks=2489/0, in_queue=2489, util=96.03% 00:34:52.439 nvme0n4: ios=1724/0, merge=0/0, ticks=2427/0, in_queue=2427, util=96.42% 00:34:52.439 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.439 11:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:52.699 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.699 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:52.959 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.960 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:52.960 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.960 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 667715 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:53.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.220 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:53.496 nvmf hotplug test: fio failed as expected 00:34:53.496 11:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.496 11:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.496 rmmod nvme_tcp 00:34:53.496 rmmod nvme_fabrics 00:34:53.496 rmmod nvme_keyring 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 664204 ']' 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 664204 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 664204 ']' 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 664204 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 664204 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 664204' 00:34:53.778 killing process with pid 664204 00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 
-- # kill 664204
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 664204
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:53.778 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:55.807 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:55.807
00:34:55.807 real 0m28.381s
00:34:55.807 user 2m22.289s
00:34:55.807 sys 0m12.604s
00:34:55.807 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:34:55.807 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:55.807 ************************************
00:34:55.807 END TEST nvmf_fio_target
00:34:55.807 ************************************
00:34:56.067 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:56.068 ************************************
00:34:56.068 START TEST nvmf_bdevio
00:34:56.068 ************************************
00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
* Looking for test storage... 
00:34:56.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.068 --rc genhtml_branch_coverage=1 00:34:56.068 --rc genhtml_function_coverage=1 00:34:56.068 --rc genhtml_legend=1 00:34:56.068 --rc geninfo_all_blocks=1 00:34:56.068 --rc geninfo_unexecuted_blocks=1 00:34:56.068 00:34:56.068 ' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.068 --rc genhtml_branch_coverage=1 00:34:56.068 --rc genhtml_function_coverage=1 00:34:56.068 --rc genhtml_legend=1 00:34:56.068 --rc geninfo_all_blocks=1 00:34:56.068 --rc geninfo_unexecuted_blocks=1 00:34:56.068 00:34:56.068 ' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.068 --rc genhtml_branch_coverage=1 00:34:56.068 --rc genhtml_function_coverage=1 00:34:56.068 --rc genhtml_legend=1 00:34:56.068 --rc geninfo_all_blocks=1 00:34:56.068 --rc geninfo_unexecuted_blocks=1 00:34:56.068 00:34:56.068 ' 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.068 --rc genhtml_branch_coverage=1 00:34:56.068 --rc genhtml_function_coverage=1 00:34:56.068 --rc genhtml_legend=1 00:34:56.068 --rc geninfo_all_blocks=1 00:34:56.068 --rc geninfo_unexecuted_blocks=1 00:34:56.068 00:34:56.068 ' 00:34:56.068 11:15:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.068 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.331 11:15:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.331 11:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:04.482 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:04.482 11:15:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:04.482 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.482 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:04.483 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:04.483 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:04.483 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:04.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:04.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms
00:35:04.483
00:35:04.483 --- 10.0.0.2 ping statistics ---
00:35:04.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:04.483 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:04.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:04.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms
00:35:04.483
00:35:04.483 --- 10.0.0.1 ping statistics ---
00:35:04.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:04.483 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:04.483 11:15:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=673406 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 673406 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 673406 ']' 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:04.483 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.483 [2024-11-15 11:15:23.237917] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:04.483 [2024-11-15 11:15:23.239061] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:35:04.483 [2024-11-15 11:15:23.239116] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.483 [2024-11-15 11:15:23.341000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:04.483 [2024-11-15 11:15:23.393265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.483 [2024-11-15 11:15:23.393321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.483 [2024-11-15 11:15:23.393330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.483 [2024-11-15 11:15:23.393338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.483 [2024-11-15 11:15:23.393345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.483 [2024-11-15 11:15:23.395747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:04.483 [2024-11-15 11:15:23.395907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:04.484 [2024-11-15 11:15:23.396068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:04.484 [2024-11-15 11:15:23.396068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:04.484 [2024-11-15 11:15:23.474325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
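[Editor's sketch, not captured log output: the target/bdevio.sh@18-22 rpc_cmd records below configure the freshly started nvmf_tgt over JSON-RPC. Collapsed into a standalone bash sequence, the same setup would look roughly like this; the RPC variable is shorthand introduced here, the flags are copied from the trace, and rpc.py is assumed to reach the target at the default /var/tmp/spdk.sock shown in the waitforlisten record above.]
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport, flags exactly as captured in the trace (-o, -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192
# RAM-backed bdev sized by MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from bdevio.sh@11-12
$RPC bdev_malloc_create 64 512 -b Malloc0
# Subsystem allowing any host (-a) with serial number SPDK00000000000001
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Expose the malloc bdev as a namespace of the subsystem
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listen on the in-namespace target address verified by the ping checks above
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420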
00:35:04.484 [2024-11-15 11:15:23.475403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:04.484 [2024-11-15 11:15:23.475547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:04.484 [2024-11-15 11:15:23.476171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:04.484 [2024-11-15 11:15:23.476191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 [2024-11-15 11:15:24.120969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 Malloc0 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.746 11:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.746 [2024-11-15 11:15:24.217319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:04.746 { 00:35:04.746 "params": { 00:35:04.746 "name": "Nvme$subsystem", 00:35:04.746 "trtype": "$TEST_TRANSPORT", 00:35:04.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.746 "adrfam": "ipv4", 00:35:04.746 "trsvcid": "$NVMF_PORT", 00:35:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.746 "hdgst": ${hdgst:-false}, 00:35:04.746 "ddgst": ${ddgst:-false} 00:35:04.746 }, 00:35:04.746 "method": "bdev_nvme_attach_controller" 00:35:04.746 } 00:35:04.746 EOF 00:35:04.746 )") 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:04.746 11:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:04.746 "params": { 00:35:04.746 "name": "Nvme1", 00:35:04.746 "trtype": "tcp", 00:35:04.746 "traddr": "10.0.0.2", 00:35:04.746 "adrfam": "ipv4", 00:35:04.746 "trsvcid": "4420", 00:35:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.746 "hdgst": false, 00:35:04.746 "ddgst": false 00:35:04.746 }, 00:35:04.746 "method": "bdev_nvme_attach_controller" 00:35:04.746 }' 00:35:05.008 [2024-11-15 11:15:24.275498] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
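The subsystem the bdevio run will exercise was assembled with the five RPCs visible in the trace above (bdevio.sh@18-22). Issued by hand against the same target they would look roughly like this; scripts/rpc.py talking to the default /var/tmp/spdk.sock is the stock SPDK tooling, whereas the test itself goes through the rpc_cmd wrapper:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, gen_nvmf_target_json emits the bdev_nvme_attach_controller fragment printed just above and hands it to bdevio on /dev/fd/62. A self-contained equivalent might look like the sketch below; the outer "subsystems" wrapper is an assumption based on SPDK's JSON-config format, while the params block is verbatim from the trace:

test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)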
00:35:05.008 [2024-11-15 11:15:24.275575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673742 ] 00:35:05.008 [2024-11-15 11:15:24.370621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:05.008 [2024-11-15 11:15:24.426292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.008 [2024-11-15 11:15:24.426456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.008 [2024-11-15 11:15:24.426457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.268 I/O targets: 00:35:05.268 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:05.268 00:35:05.268 00:35:05.268 CUnit - A unit testing framework for C - Version 2.1-3 00:35:05.268 http://cunit.sourceforge.net/ 00:35:05.268 00:35:05.268 00:35:05.268 Suite: bdevio tests on: Nvme1n1 00:35:05.268 Test: blockdev write read block ...passed 00:35:05.268 Test: blockdev write zeroes read block ...passed 00:35:05.268 Test: blockdev write zeroes read no split ...passed 00:35:05.268 Test: blockdev write zeroes read split ...passed 00:35:05.268 Test: blockdev write zeroes read split partial ...passed 00:35:05.268 Test: blockdev reset ...[2024-11-15 11:15:24.766719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:05.268 [2024-11-15 11:15:24.766821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137c970 (9): Bad file descriptor 00:35:05.529 [2024-11-15 11:15:24.821222] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:05.529 passed 00:35:05.529 Test: blockdev write read 8 blocks ...passed 00:35:05.529 Test: blockdev write read size > 128k ...passed 00:35:05.529 Test: blockdev write read invalid size ...passed 00:35:05.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:05.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:05.529 Test: blockdev write read max offset ...passed 00:35:05.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:05.529 Test: blockdev writev readv 8 blocks ...passed 00:35:05.529 Test: blockdev writev readv 30 x 1block ...passed 00:35:05.790 Test: blockdev writev readv block ...passed 00:35:05.790 Test: blockdev writev readv size > 128k ...passed 00:35:05.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:05.790 Test: blockdev comparev and writev ...[2024-11-15 11:15:25.083035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.083084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.083100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.083109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.083603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.083617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.083632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.083641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.084028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.084040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.084056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.084066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.084443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.084455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:05.790 [2024-11-15 11:15:25.084468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.790 [2024-11-15 11:15:25.084476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:05.790 passed 00:35:05.790 Test: blockdev nvme passthru rw ...passed 00:35:05.791 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:15:25.170093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.791 [2024-11-15 11:15:25.170110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:05.791 [2024-11-15 11:15:25.170308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.791 [2024-11-15 11:15:25.170319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:05.791 [2024-11-15 11:15:25.170543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.791 [2024-11-15 11:15:25.170553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:05.791 [2024-11-15 11:15:25.170753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.791 [2024-11-15 11:15:25.170771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:05.791 passed 00:35:05.791 Test: blockdev nvme admin passthru ...passed 00:35:05.791 Test: blockdev copy ...passed 00:35:05.791 00:35:05.791 Run Summary: Type Total Ran Passed Failed Inactive 00:35:05.791 suites 1 1 n/a 0 0 00:35:05.791 tests 23 23 23 0 0 00:35:05.791 asserts 152 152 152 0 n/a 00:35:05.791 00:35:05.791 Elapsed time = 1.207 seconds 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.052 rmmod nvme_tcp 00:35:06.052 rmmod nvme_fabrics 00:35:06.052 rmmod nvme_keyring 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
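Two readings of the output above: the "comparev and writev" case drives fused COMPARE+WRITE pairs in which each COMPARE is made to miscompare, so every COMPARE FAILURE (02/85) notice is immediately followed by its fused WRITE partner being dropped with ABORTED - FAILED FUSED (00/09); the pairs are the expected-failure path, not a malfunction. With all 23 tests passed, teardown begins: the modprobe -v -r calls unload the initiator modules, and the iptables and namespace state is unwound below using the comment tag written at setup time. A condensed sketch of that cleanup (the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to):

# One save/filter/restore drops every SPDK-tagged rule at once (iptr below):
iptables-save | grep -v SPDK_NVMF | iptables-restore
# _remove_spdk_ns then discards the target namespace (assumed expansion):
ip netns delete cvl_0_0_ns_spdk
# and the initiator-side address is flushed, as traced below:
ip -4 addr flush cvl_0_1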
00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 673406 ']' 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 673406 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 673406 ']' 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 673406 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 673406 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 673406' 00:35:06.052 killing process with pid 673406 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 673406 00:35:06.052 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 673406 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.316 11:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.864 00:35:08.864 real 0m12.398s 00:35:08.864 user 0m9.766s 
00:35:08.864 sys 0m6.640s 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.864 ************************************ 00:35:08.864 END TEST nvmf_bdevio 00:35:08.864 ************************************ 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:08.864 00:35:08.864 real 5m1.453s 00:35:08.864 user 10m16.137s 00:35:08.864 sys 2m4.349s 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.864 11:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:08.864 ************************************ 00:35:08.864 END TEST nvmf_target_core_interrupt_mode 00:35:08.864 ************************************ 00:35:08.864 11:15:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.864 11:15:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:08.864 11:15:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:08.864 11:15:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.864 ************************************ 00:35:08.864 START TEST nvmf_interrupt 00:35:08.864 ************************************ 00:35:08.864 11:15:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.864 * Looking for test storage... 
00:35:08.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.864 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:08.864 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.865 --rc genhtml_branch_coverage=1 00:35:08.865 --rc genhtml_function_coverage=1 00:35:08.865 --rc genhtml_legend=1 00:35:08.865 --rc geninfo_all_blocks=1 00:35:08.865 --rc geninfo_unexecuted_blocks=1 00:35:08.865 00:35:08.865 ' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.865 --rc genhtml_branch_coverage=1 00:35:08.865 --rc genhtml_function_coverage=1 00:35:08.865 --rc genhtml_legend=1 00:35:08.865 --rc geninfo_all_blocks=1 00:35:08.865 --rc geninfo_unexecuted_blocks=1 00:35:08.865 00:35:08.865 ' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.865 --rc genhtml_branch_coverage=1 00:35:08.865 --rc genhtml_function_coverage=1 00:35:08.865 --rc genhtml_legend=1 00:35:08.865 --rc geninfo_all_blocks=1 00:35:08.865 --rc geninfo_unexecuted_blocks=1 00:35:08.865 00:35:08.865 ' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.865 --rc genhtml_branch_coverage=1 00:35:08.865 --rc genhtml_function_coverage=1 00:35:08.865 --rc genhtml_legend=1 00:35:08.865 --rc geninfo_all_blocks=1 00:35:08.865 --rc geninfo_unexecuted_blocks=1 00:35:08.865 00:35:08.865 ' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.865 11:15:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.008 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.008 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:17.008 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:17.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.009 11:15:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:17.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:17.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:17.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:17.009 11:15:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:17.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:35:17.009 00:35:17.009 --- 10.0.0.2 ping statistics --- 00:35:17.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.009 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:35:17.009 00:35:17.009 --- 10.0.0.1 ping statistics --- 00:35:17.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.009 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=678107 00:35:17.009 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 678107 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 678107 ']' 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:17.010 11:15:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.010 [2024-11-15 11:15:35.768331] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:17.010 [2024-11-15 11:15:35.769452] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:35:17.010 [2024-11-15 11:15:35.769504] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.010 [2024-11-15 11:15:35.870852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:17.010 [2024-11-15 11:15:35.921411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
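Same plumbing as the bdevio run, but the interrupt-mode target now gets a two-core mask. The launch traced above expands to the following (workspace path shortened):

# nvmfappstart -m 0x3 from target/interrupt.sh@15:
ip netns exec cvl_0_0_ns_spdk \
    build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3

-m 0x3 pins one reactor each to cores 0 and 1 (the two "Reactor started on core" notices below), and --interrupt-mode switches the reactors from busy-polling to sleeping until an event arrives, which is why the idle checks later in this test expect top to report roughly 0% CPU.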
00:35:17.010 [2024-11-15 11:15:35.921463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.010 [2024-11-15 11:15:35.921472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.010 [2024-11-15 11:15:35.921479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.010 [2024-11-15 11:15:35.921485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.010 [2024-11-15 11:15:35.923166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.010 [2024-11-15 11:15:35.923170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.010 [2024-11-15 11:15:36.001461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:17.010 [2024-11-15 11:15:36.002055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:17.010 [2024-11-15 11:15:36.002345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:17.270 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:17.271 5000+0 records in 00:35:17.271 5000+0 records out 00:35:17.271 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0197048 s, 520 MB/s 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.271 AIO0 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.271 [2024-11-15 11:15:36.720215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.271 11:15:36 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.271 [2024-11-15 11:15:36.764724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 678107 0 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 0 idle 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:17.271 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678107 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.33 reactor_0' 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678107 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.33 reactor_0 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 678107 1 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 1 idle 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:17.532 11:15:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678111 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678111 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=678472 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
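reactor_is_idle and reactor_is_busy both funnel into reactor_is_busy_or_idle, which samples the reactor thread once with top and compares the %CPU field against a threshold (65 busy / 30 idle by default; this test lowers the busy threshold to 30 via BUSY_THRESHOLD). A condensed sketch with this run's values, following interrupt/common.sh as traced above:

# One-shot sample of reactor_0 of pid 678107:
cpu=$(top -bHn 1 -p 678107 -w 256 | grep reactor_0 | awk '{print $9}')
cpu=${cpu%%.*}      # integer-truncate: "0.0" -> 0, "50.0" -> 50, "99.9" -> 99
# idle check: fail if the reactor burns CPU while quiescent
(( cpu > 30 )) && echo "unexpectedly busy: ${cpu}%"
# busy check, used under load: fail if it sits below the busy threshold
(( cpu < 30 )) && echo "unexpectedly idle: ${cpu}%"

In this run the logic works out as intended: both reactors read 0.0% in the idle checks above, then 50.0% (reactor_0) and 99.9% (reactor_1) in the busy checks that follow, once spdk_nvme_perf starts pushing the queue.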
00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 678107 0 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 678107 0 busy 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:17.793 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678107 root 20 0 128.2g 43776 32256 R 50.0 0.0 0:00.41 reactor_0' 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678107 root 20 0 128.2g 43776 32256 R 50.0 0.0 0:00.41 reactor_0 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=50.0 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=50 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.054 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 678107 1 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 678107 1 busy 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678111 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.23 reactor_1' 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678111 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.23 reactor_1 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.055 11:15:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 678472 00:35:28.050 Initializing NVMe Controllers 00:35:28.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:28.050 Controller IO queue size 256, less than required. 00:35:28.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:28.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:28.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:28.050 Initialization complete. Launching workers. 
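The spdk_nvme_perf run that produced the summary below was launched at interrupt.sh@31 with a fairly dense flag string. Restated as a sketch with each knob annotated (flag meanings per the tool's usage text; the binary path is shortened):

# Readable restatement of the perf invocation from the xtrace above.
perf_args=(
    -q 256      # queue depth per I/O queue
    -o 4096     # 4 KiB I/O size
    -w randrw   # random mixed read/write workload
    -M 30       # rwmixread: 30% reads, 70% writes
    -t 10       # run time in seconds
    -c 0xC      # core mask: workers on cores 2 and 3
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
)
build/bin/spdk_nvme_perf "${perf_args[@]}"

The "queue size 256, less than required" notice above is the tool warning that a queue depth of 256 fills the controller's advertised I/O queue, so overflow requests wait in the host driver; the two "from core N" rows in the summary that follows correspond to the two workers pinned by -c 0xC.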
00:35:28.050 ======================================================== 00:35:28.050 Latency(us) 00:35:28.050 Device Information : IOPS MiB/s Average min max 00:35:28.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19074.70 74.51 13425.62 4275.33 32321.78 00:35:28.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19665.50 76.82 13019.16 8500.85 28398.74 00:35:28.050 ======================================================== 00:35:28.050 Total : 38740.20 151.33 13219.29 4275.33 32321.78 00:35:28.050 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 678107 0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 0 idle 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678107 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.32 reactor_0' 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678107 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.32 reactor_0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 678107 1 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 1 idle 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.050 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:28.051 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:28.311 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678111 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:28.311 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678111 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:28.311 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.312 11:15:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:29.252 11:15:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:29.252 11:15:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:29.252 11:15:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:29.252 11:15:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:29.252 11:15:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 678107 0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 0 idle 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678107 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0' 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678107 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 678107 1 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 678107 1 idle 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=678107 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.162 11:15:50 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 678107 -w 256 00:35:31.162 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 678111 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 678111 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.423 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:31.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:31.684 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.685 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:31.685 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.685 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:31.685 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.685 11:15:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.685 rmmod nvme_tcp 00:35:31.685 rmmod nvme_fabrics 00:35:31.685 rmmod nvme_keyring 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 678107 ']' 00:35:31.685 
11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 678107 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 678107 ']' 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 678107 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 678107 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 678107' 00:35:31.685 killing process with pid 678107 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 678107 00:35:31.685 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 678107 00:35:31.945 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.946 11:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.490 11:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.490 00:35:34.490 real 0m25.487s 00:35:34.490 user 0m40.349s 00:35:34.490 sys 0m9.851s 00:35:34.490 11:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:34.490 11:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.490 ************************************ 00:35:34.490 END TEST nvmf_interrupt 00:35:34.490 ************************************ 00:35:34.490 00:35:34.490 real 30m5.384s 00:35:34.490 user 61m16.235s 00:35:34.490 sys 10m18.395s 00:35:34.490 11:15:53 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:34.490 11:15:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.490 ************************************ 00:35:34.490 END TEST nvmf_tcp 00:35:34.490 ************************************ 00:35:34.490 11:15:53 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:34.490 11:15:53 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.491 11:15:53 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 
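Before the spdkcli suite gets going, note the host-side round trip the interrupt test just completed: nvme connect against the subsystem, a serial-number poll over lsblk (waitforserial), then nvme disconnect with a mirrored poll (waitforserial_disconnect). A hedged recreation of that sequence, with the flags, serial, and retry budget taken from the xtrace above:

# Hedged recreation of waitforserial / waitforserial_disconnect; the
# real helpers live in common/autotest_common.sh.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

i=0
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )); do
    (( i++ <= 15 )) || exit 1   # same 15-retry budget as waitforserial
    sleep 2
done

nvme disconnect -n nqn.2016-06.io.spdk:cnode1

while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1   # poll until the namespace is gone, as waitforserial_disconnect does
done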
00:35:34.491 11:15:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:34.491 11:15:53 -- common/autotest_common.sh@10 -- # set +x 00:35:34.491 ************************************ 00:35:34.491 START TEST spdkcli_nvmf_tcp 00:35:34.491 ************************************ 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.491 * Looking for test storage... 00:35:34.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.491 --rc genhtml_branch_coverage=1 00:35:34.491 --rc genhtml_function_coverage=1 00:35:34.491 --rc genhtml_legend=1 00:35:34.491 --rc geninfo_all_blocks=1 00:35:34.491 --rc geninfo_unexecuted_blocks=1 00:35:34.491 00:35:34.491 ' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.491 --rc genhtml_branch_coverage=1 00:35:34.491 --rc genhtml_function_coverage=1 00:35:34.491 --rc genhtml_legend=1 00:35:34.491 --rc geninfo_all_blocks=1 00:35:34.491 --rc geninfo_unexecuted_blocks=1 00:35:34.491 00:35:34.491 ' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.491 --rc genhtml_branch_coverage=1 00:35:34.491 --rc genhtml_function_coverage=1 00:35:34.491 --rc genhtml_legend=1 00:35:34.491 --rc geninfo_all_blocks=1 00:35:34.491 --rc geninfo_unexecuted_blocks=1 00:35:34.491 00:35:34.491 ' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.491 --rc genhtml_branch_coverage=1 00:35:34.491 --rc genhtml_function_coverage=1 00:35:34.491 --rc genhtml_legend=1 00:35:34.491 --rc geninfo_all_blocks=1 00:35:34.491 --rc geninfo_unexecuted_blocks=1 00:35:34.491 00:35:34.491 ' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:34.491 
11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:34.491 11:15:53 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=681656 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 681656 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 681656 ']' 00:35:34.491 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.492 11:15:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:34.492 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.492 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.492 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.492 11:15:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.492 [2024-11-15 11:15:53.819304] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
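Here run_nvmf_tgt starts the target with -m 0x3 -p 0 and then parks in waitforlisten until the app answers on /var/tmp/spdk.sock; the "Waiting for process to start up..." line is that gate. A hypothetical minimal stand-in for it, with paths abbreviated via $rootdir (the probe method and retry budget are assumptions; the real helper is in common/autotest_common.sh):

# Hypothetical waitforlisten stand-in: poll the RPC socket until the
# freshly started nvmf_tgt responds. rpc_get_methods is a cheap RPC
# that every SPDK app serves; the ~10 s budget is an assumption.
"$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
nvmf_tgt_pid=$!

for ((i = 0; i < 100; i++)); do
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done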
00:35:34.492 [2024-11-15 11:15:53.819375] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681656 ] 00:35:34.492 [2024-11-15 11:15:53.910592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:34.492 [2024-11-15 11:15:53.964618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.492 [2024-11-15 11:15:53.964669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.432 11:15:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:35.432 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:35.432 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:35.432 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:35.432 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:35.432 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:35.432 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:35.432 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.432 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.432 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:35.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:35.432 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:35.432 ' 00:35:37.976 [2024-11-15 11:15:57.359735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.360 [2024-11-15 11:15:58.715743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:41.909 [2024-11-15 11:16:01.243006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:44.458 [2024-11-15 11:16:03.473274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:45.844 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:45.844 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:45.844 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:45.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:45.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.845 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:45.845 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.845 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:45.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:45.845 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:45.845 11:16:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.416 
11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.416 11:16:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:46.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:46.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:46.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:46.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:46.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:46.416 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:46.416 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:46.416 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:46.416 ' 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:53.000 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:53.000 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:53.000 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:53.000 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.000 
11:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 681656 ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 681656' 00:35:53.000 killing process with pid 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 681656 ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 681656 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 681656 ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 681656 00:35:53.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (681656) - No such process 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 681656 is not found' 00:35:53.000 Process with pid 681656 is not found 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:53.000 00:35:53.000 real 0m18.147s 00:35:53.000 user 0m40.272s 00:35:53.000 sys 0m0.906s 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:53.000 11:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.000 ************************************ 00:35:53.001 END TEST spdkcli_nvmf_tcp 00:35:53.001 ************************************ 00:35:53.001 11:16:11 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:53.001 11:16:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:53.001 11:16:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:53.001 11:16:11 -- common/autotest_common.sh@10 -- # set +x 00:35:53.001 ************************************ 00:35:53.001 START TEST nvmf_identify_passthru 00:35:53.001 ************************************ 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:53.001 * Looking for test storage... 
00:35:53.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.001 --rc genhtml_branch_coverage=1 00:35:53.001 --rc genhtml_function_coverage=1 00:35:53.001 --rc genhtml_legend=1 00:35:53.001 --rc geninfo_all_blocks=1 00:35:53.001 --rc geninfo_unexecuted_blocks=1 00:35:53.001 00:35:53.001 ' 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.001 --rc genhtml_branch_coverage=1 00:35:53.001 --rc genhtml_function_coverage=1 00:35:53.001 --rc genhtml_legend=1 00:35:53.001 --rc geninfo_all_blocks=1 00:35:53.001 --rc geninfo_unexecuted_blocks=1 00:35:53.001 00:35:53.001 ' 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.001 --rc genhtml_branch_coverage=1 00:35:53.001 --rc genhtml_function_coverage=1 00:35:53.001 --rc genhtml_legend=1 00:35:53.001 --rc geninfo_all_blocks=1 00:35:53.001 --rc geninfo_unexecuted_blocks=1 00:35:53.001 00:35:53.001 ' 00:35:53.001 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.001 --rc genhtml_branch_coverage=1 00:35:53.001 --rc genhtml_function_coverage=1 00:35:53.001 --rc genhtml_legend=1 00:35:53.001 --rc geninfo_all_blocks=1 00:35:53.001 --rc geninfo_unexecuted_blocks=1 00:35:53.001 00:35:53.001 ' 00:35:53.001 11:16:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.001 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.001 11:16:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.001 11:16:11 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.001 11:16:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.001 11:16:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.001 11:16:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.002 11:16:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.002 11:16:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.002 11:16:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.002 11:16:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.002 11:16:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:53.002 11:16:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.002 11:16:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.002 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:53.002 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:53.002 11:16:11 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.002 11:16:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:59.588 11:16:19 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:59.588 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:59.588 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:59.588 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:59.588 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:59.588 11:16:19 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.588 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:35:59.849 00:35:59.849 --- 10.0.0.2 ping statistics --- 00:35:59.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.849 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
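The nvmftestinit trace above builds the single-host NVMe/TCP topology: one e810 port is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and a ping in each direction proves reachability (the reverse-direction replies continue below). A condensed sketch of those steps, with the interface names as discovered on this machine and the iptables comment abbreviated:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so teardown can strip it later with grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns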
00:35:59.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:35:59.849 00:35:59.849 --- 10.0.0.1 ping statistics --- 00:35:59.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.849 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.849 11:16:19 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:00.110 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.110 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:00.110 11:16:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:00.110 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:00.110 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:00.110 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:00.111 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:00.111 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:00.683 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:00.684 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:00.684 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:00.684 11:16:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=689081 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:01.255 11:16:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 689081 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 689081 ']' 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:01.255 11:16:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.255 [2024-11-15 11:16:20.603593] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:36:01.255 [2024-11-15 11:16:20.603661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.255 [2024-11-15 11:16:20.702428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.255 [2024-11-15 11:16:20.756175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.255 [2024-11-15 11:16:20.756229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
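Above, the harness launches nvmf_tgt inside the target namespace with --wait-for-rpc and parks in waitforlisten until the application's RPC socket accepts requests; the nvmf_set_config and framework_start_init exchanges that follow below travel over that socket. A minimal sketch of the start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket path and a placeholder $SPDK_BIN directory; the polling loop approximates what waitforlisten does rather than copying its implementation:

    # Start the target paused: subsystems stay uninitialized until an RPC says go.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        [ -S "$rpc_sock" ] && break            # socket exists: RPC server is up
        sleep 0.1
    done
    [ -S "$rpc_sock" ] || { echo "timed out waiting for $rpc_sock" >&2; exit 1; }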
00:36:01.255 [2024-11-15 11:16:20.756238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.255 [2024-11-15 11:16:20.756245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.255 [2024-11-15 11:16:20.756251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.255 [2024-11-15 11:16:20.758405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.255 [2024-11-15 11:16:20.758585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.255 [2024-11-15 11:16:20.758701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.255 [2024-11-15 11:16:20.758895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.197 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.197 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:02.197 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:02.197 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.197 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.197 INFO: Log level set to 20 00:36:02.197 INFO: Requests: 00:36:02.197 { 00:36:02.197 "jsonrpc": "2.0", 00:36:02.197 "method": "nvmf_set_config", 00:36:02.197 "id": 1, 00:36:02.197 "params": { 00:36:02.197 "admin_cmd_passthru": { 00:36:02.197 "identify_ctrlr": true 00:36:02.197 } 00:36:02.197 } 00:36:02.197 } 00:36:02.198 00:36:02.198 INFO: response: 00:36:02.198 { 00:36:02.198 "jsonrpc": "2.0", 00:36:02.198 "id": 1, 00:36:02.198 "result": true 00:36:02.198 } 00:36:02.198 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.198 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.198 INFO: Setting log level to 20 00:36:02.198 INFO: Setting log level to 20 00:36:02.198 INFO: Log level set to 20 00:36:02.198 INFO: Log level set to 20 00:36:02.198 INFO: Requests: 00:36:02.198 { 00:36:02.198 "jsonrpc": "2.0", 00:36:02.198 "method": "framework_start_init", 00:36:02.198 "id": 1 00:36:02.198 } 00:36:02.198 00:36:02.198 INFO: Requests: 00:36:02.198 { 00:36:02.198 "jsonrpc": "2.0", 00:36:02.198 "method": "framework_start_init", 00:36:02.198 "id": 1 00:36:02.198 } 00:36:02.198 00:36:02.198 [2024-11-15 11:16:21.481282] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:02.198 INFO: response: 00:36:02.198 { 00:36:02.198 "jsonrpc": "2.0", 00:36:02.198 "id": 1, 00:36:02.198 "result": true 00:36:02.198 } 00:36:02.198 00:36:02.198 INFO: response: 00:36:02.198 { 00:36:02.198 "jsonrpc": "2.0", 00:36:02.198 "id": 1, 00:36:02.198 "result": true 00:36:02.198 } 00:36:02.198 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.198 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.198 11:16:21 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:02.198 INFO: Setting log level to 40 00:36:02.198 INFO: Setting log level to 40 00:36:02.198 INFO: Setting log level to 40 00:36:02.198 [2024-11-15 11:16:21.494634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.198 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.198 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.198 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.459 Nvme0n1 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.459 [2024-11-15 11:16:21.884740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.459 [ 00:36:02.459 { 00:36:02.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:02.459 "subtype": "Discovery", 00:36:02.459 "listen_addresses": [], 00:36:02.459 "allow_any_host": true, 00:36:02.459 "hosts": [] 00:36:02.459 }, 00:36:02.459 { 00:36:02.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.459 "subtype": "NVMe", 00:36:02.459 "listen_addresses": [ 00:36:02.459 { 00:36:02.459 "trtype": "TCP", 00:36:02.459 "adrfam": "IPv4", 00:36:02.459 "traddr": "10.0.0.2", 00:36:02.459 "trsvcid": "4420" 00:36:02.459 } 00:36:02.459 ], 00:36:02.459 "allow_any_host": true, 00:36:02.459 "hosts": [], 00:36:02.459 "serial_number": 
"SPDK00000000000001", 00:36:02.459 "model_number": "SPDK bdev Controller", 00:36:02.459 "max_namespaces": 1, 00:36:02.459 "min_cntlid": 1, 00:36:02.459 "max_cntlid": 65519, 00:36:02.459 "namespaces": [ 00:36:02.459 { 00:36:02.459 "nsid": 1, 00:36:02.459 "bdev_name": "Nvme0n1", 00:36:02.459 "name": "Nvme0n1", 00:36:02.459 "nguid": "36344730526054870025384500000044", 00:36:02.459 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:02.459 } 00:36:02.459 ] 00:36:02.459 } 00:36:02.459 ] 00:36:02.459 11:16:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:02.459 11:16:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:02.720 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:02.720 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:02.720 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:02.720 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.980 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.980 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.980 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:02.980 11:16:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.980 rmmod nvme_tcp 00:36:02.980 rmmod nvme_fabrics 00:36:02.980 rmmod nvme_keyring 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
689081 ']' 00:36:02.980 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 689081 00:36:02.980 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 689081 ']' 00:36:02.981 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 689081 00:36:02.981 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:36:02.981 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:02.981 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 689081 00:36:03.242 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:03.242 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:03.242 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 689081' 00:36:03.242 killing process with pid 689081 00:36:03.242 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 689081 00:36:03.242 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 689081 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.503 11:16:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.503 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.503 11:16:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.414 11:16:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.414 00:36:05.414 real 0m13.184s 00:36:05.414 user 0m10.603s 00:36:05.414 sys 0m6.619s 00:36:05.414 11:16:24 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:05.414 11:16:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:05.414 ************************************ 00:36:05.414 END TEST nvmf_identify_passthru 00:36:05.414 ************************************ 00:36:05.674 11:16:24 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:05.674 11:16:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:05.674 11:16:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:05.674 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:36:05.674 ************************************ 00:36:05.674 START TEST nvmf_dif 00:36:05.674 ************************************ 00:36:05.674 11:16:25 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:05.674 * Looking for test storage... 
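Before the dif test output continues, note the identify_passthru teardown traced just above: the target process is killed and reaped, the nvme-tcp/nvme-fabrics modules are unloaded, the tagged iptables rules are stripped, and the namespace plumbing is flushed. Reassembled as a sketch; remove_spdk_ns executes with xtrace suppressed (13> /dev/null), so the final ip netns del is inferred from the helper's name rather than read from the trace:

    kill "$nvmfpid" && wait "$nvmfpid"           # killprocess: signal, then reap
    modprobe -v -r nvme-tcp                      # also rmmods nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    # iptr: restore everything except rules carrying the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns del cvl_0_0_ns_spdk                 # assumed body of remove_spdk_ns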
00:36:05.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:05.674 11:16:25 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:05.674 11:16:25 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:05.674 11:16:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:05.675 11:16:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.675 11:16:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:05.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.936 --rc genhtml_branch_coverage=1 00:36:05.936 --rc genhtml_function_coverage=1 00:36:05.936 --rc genhtml_legend=1 00:36:05.936 --rc geninfo_all_blocks=1 00:36:05.936 --rc geninfo_unexecuted_blocks=1 00:36:05.936 00:36:05.936 ' 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:05.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.936 --rc genhtml_branch_coverage=1 00:36:05.936 --rc genhtml_function_coverage=1 00:36:05.936 --rc genhtml_legend=1 00:36:05.936 --rc geninfo_all_blocks=1 00:36:05.936 --rc geninfo_unexecuted_blocks=1 00:36:05.936 00:36:05.936 ' 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:36:05.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.936 --rc genhtml_branch_coverage=1 00:36:05.936 --rc genhtml_function_coverage=1 00:36:05.936 --rc genhtml_legend=1 00:36:05.936 --rc geninfo_all_blocks=1 00:36:05.936 --rc geninfo_unexecuted_blocks=1 00:36:05.936 00:36:05.936 ' 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:05.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.936 --rc genhtml_branch_coverage=1 00:36:05.936 --rc genhtml_function_coverage=1 00:36:05.936 --rc genhtml_legend=1 00:36:05.936 --rc geninfo_all_blocks=1 00:36:05.936 --rc geninfo_unexecuted_blocks=1 00:36:05.936 00:36:05.936 ' 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.936 11:16:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.936 11:16:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.936 11:16:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.936 11:16:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.936 11:16:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:05.936 11:16:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:05.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:05.936 11:16:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.936 11:16:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.936 11:16:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:05.937 11:16:25 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:05.937 11:16:25 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:05.937 11:16:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:14.073 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.073 
11:16:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:14.073 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:14.073 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:14.073 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.073 11:16:32 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:36:14.073 00:36:14.073 --- 10.0.0.2 ping statistics --- 00:36:14.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.073 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:36:14.074 11:16:32 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:36:14.074 00:36:14.074 --- 10.0.0.1 ping statistics --- 00:36:14.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.074 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:36:14.074 11:16:32 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.074 11:16:32 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:14.074 11:16:32 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:14.074 11:16:32 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.617 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:16.617 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:16.617 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:16.878 11:16:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:16.878 11:16:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=695186 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 695186 00:36:16.878 11:16:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 695186 ']' 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:16.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:16.878 11:16:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.878 [2024-11-15 11:16:36.260171] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:36:16.878 [2024-11-15 11:16:36.260219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.878 [2024-11-15 11:16:36.351733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.878 [2024-11-15 11:16:36.387320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.878 [2024-11-15 11:16:36.387356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.878 [2024-11-15 11:16:36.387364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.878 [2024-11-15 11:16:36.387371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.878 [2024-11-15 11:16:36.387376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.878 [2024-11-15 11:16:36.387961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:17.821 11:16:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 11:16:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.821 11:16:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:17.821 11:16:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 [2024-11-15 11:16:37.085937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.821 11:16:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 ************************************ 00:36:17.821 START TEST fio_dif_1_default 00:36:17.821 ************************************ 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 bdev_null0 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.821 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.821 [2024-11-15 11:16:37.170316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:17.822 { 00:36:17.822 "params": { 00:36:17.822 "name": "Nvme$subsystem", 00:36:17.822 "trtype": "$TEST_TRANSPORT", 00:36:17.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.822 "adrfam": "ipv4", 00:36:17.822 "trsvcid": "$NVMF_PORT", 00:36:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.822 "hdgst": ${hdgst:-false}, 00:36:17.822 
"ddgst": ${ddgst:-false} 00:36:17.822 }, 00:36:17.822 "method": "bdev_nvme_attach_controller" 00:36:17.822 } 00:36:17.822 EOF 00:36:17.822 )") 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:17.822 "params": { 00:36:17.822 "name": "Nvme0", 00:36:17.822 "trtype": "tcp", 00:36:17.822 "traddr": "10.0.0.2", 00:36:17.822 "adrfam": "ipv4", 00:36:17.822 "trsvcid": "4420", 00:36:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.822 "hdgst": false, 00:36:17.822 "ddgst": false 00:36:17.822 }, 00:36:17.822 "method": "bdev_nvme_attach_controller" 00:36:17.822 }' 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.822 11:16:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:18.084 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:18.084 fio-3.35 00:36:18.084 Starting 1 thread 00:36:30.347 00:36:30.347 filename0: (groupid=0, jobs=1): err= 0: pid=695752: Fri Nov 15 11:16:48 2024 00:36:30.347 read: IOPS=190, BW=763KiB/s (781kB/s)(7648KiB/10025msec) 00:36:30.347 slat (nsec): min=5484, max=35032, avg=6308.41, stdev=1459.48 00:36:30.347 clat (usec): min=480, max=43710, avg=20955.00, stdev=20335.74 00:36:30.347 lat (usec): min=488, max=43745, avg=20961.31, stdev=20335.75 00:36:30.347 clat percentiles (usec): 00:36:30.347 | 1.00th=[ 553], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 685], 00:36:30.347 | 30.00th=[ 709], 40.00th=[ 807], 50.00th=[ 996], 60.00th=[41157], 00:36:30.347 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:30.347 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:36:30.347 | 99.99th=[43779] 00:36:30.347 bw ( KiB/s): min= 672, max= 832, per=100.00%, avg=763.20, stdev=36.37, samples=20 00:36:30.347 iops : min= 168, max= 208, avg=190.80, stdev= 9.09, samples=20 00:36:30.347 lat (usec) : 500=0.16%, 750=37.50%, 1000=12.40% 00:36:30.347 lat (msec) : 2=0.16%, 50=49.79% 00:36:30.347 cpu : usr=93.07%, sys=6.72%, ctx=11, majf=0, minf=227 00:36:30.347 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.347 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.347 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:36:30.347 00:36:30.347 Run status group 0 (all jobs): 00:36:30.347 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7648KiB (7832kB), run=10025-10025msec 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 00:36:30.347 real 0m11.314s 00:36:30.347 user 0m18.530s 00:36:30.347 sys 0m1.141s 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 ************************************ 00:36:30.347 END TEST fio_dif_1_default 00:36:30.347 ************************************ 00:36:30.347 11:16:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:30.347 11:16:48 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:30.347 11:16:48 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 ************************************ 00:36:30.347 START TEST fio_dif_1_multi_subsystems 00:36:30.347 ************************************ 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 bdev_null0 00:36:30.347 11:16:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 [2024-11-15 11:16:48.564062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.347 bdev_null1 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:30.347 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.348 { 00:36:30.348 "params": { 00:36:30.348 "name": "Nvme$subsystem", 00:36:30.348 "trtype": "$TEST_TRANSPORT", 00:36:30.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.348 "adrfam": "ipv4", 00:36:30.348 "trsvcid": "$NVMF_PORT", 00:36:30.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.348 "hdgst": ${hdgst:-false}, 00:36:30.348 "ddgst": ${ddgst:-false} 00:36:30.348 }, 00:36:30.348 "method": "bdev_nvme_attach_controller" 00:36:30.348 } 00:36:30.348 EOF 00:36:30.348 )") 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.348 
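The ldd/grep/awk steps traced around each fio launch are the harness probing whether the SPDK fio plugin was built against a sanitizer: fio itself is uninstrumented, so a linked libasan (or clang's libclang_rt.asan) runtime has to be preloaded ahead of the plugin for the dlopen to succeed. A sketch of that probe in isolation, mirroring the traced logic (in this run both greps come back empty, so only the plugin ends up in LD_PRELOAD):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # third column of ldd output is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # stays empty on a non-sanitizer build, as seen in the trace above
    export LD_PRELOAD="$asan_lib $plugin"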
11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.348 { 00:36:30.348 "params": { 00:36:30.348 "name": "Nvme$subsystem", 00:36:30.348 "trtype": "$TEST_TRANSPORT", 00:36:30.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.348 "adrfam": "ipv4", 00:36:30.348 "trsvcid": "$NVMF_PORT", 00:36:30.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.348 "hdgst": ${hdgst:-false}, 00:36:30.348 "ddgst": ${ddgst:-false} 00:36:30.348 }, 00:36:30.348 "method": "bdev_nvme_attach_controller" 00:36:30.348 } 00:36:30.348 EOF 00:36:30.348 )") 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:30.348 "params": { 00:36:30.348 "name": "Nvme0", 00:36:30.348 "trtype": "tcp", 00:36:30.348 "traddr": "10.0.0.2", 00:36:30.348 "adrfam": "ipv4", 00:36:30.348 "trsvcid": "4420", 00:36:30.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.348 "hdgst": false, 00:36:30.348 "ddgst": false 00:36:30.348 }, 00:36:30.348 "method": "bdev_nvme_attach_controller" 00:36:30.348 },{ 00:36:30.348 "params": { 00:36:30.348 "name": "Nvme1", 00:36:30.348 "trtype": "tcp", 00:36:30.348 "traddr": "10.0.0.2", 00:36:30.348 "adrfam": "ipv4", 00:36:30.348 "trsvcid": "4420", 00:36:30.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.348 "hdgst": false, 00:36:30.348 "ddgst": false 00:36:30.348 }, 00:36:30.348 "method": "bdev_nvme_attach_controller" 00:36:30.348 }' 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:30.348 11:16:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.348 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:30.348 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:30.348 fio-3.35 00:36:30.348 Starting 2 threads 00:36:42.584 00:36:42.584 filename0: (groupid=0, jobs=1): err= 0: pid=697998: Fri Nov 15 11:16:59 2024 00:36:42.584 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10037msec) 00:36:42.584 slat (nsec): min=5487, max=24638, avg=6331.90, stdev=1285.50 00:36:42.584 clat (usec): min=587, max=41873, avg=21068.87, stdev=20154.00 00:36:42.584 lat (usec): min=595, max=41897, avg=21075.20, stdev=20153.98 00:36:42.584 clat percentiles (usec): 00:36:42.584 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 840], 00:36:42.585 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:36:42.585 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:42.585 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:42.585 | 99.99th=[41681] 00:36:42.585 bw ( KiB/s): min= 672, max= 768, per=50.09%, avg=760.00, stdev=25.16, samples=20 00:36:42.585 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:36:42.585 lat (usec) : 750=3.99%, 1000=45.69% 00:36:42.585 lat (msec) : 2=0.11%, 50=50.21% 00:36:42.585 cpu : usr=95.37%, sys=4.43%, ctx=8, majf=0, minf=173 00:36:42.585 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.585 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.585 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:42.585 filename1: (groupid=0, jobs=1): err= 0: pid=697999: Fri Nov 15 11:16:59 2024 00:36:42.585 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10039msec) 00:36:42.585 slat (nsec): min=5487, max=27258, avg=6353.73, stdev=1255.27 00:36:42.585 clat (usec): min=551, max=42743, avg=21072.63, stdev=20157.73 00:36:42.585 lat (usec): min=556, max=42770, avg=21078.99, stdev=20157.72 00:36:42.585 clat percentiles (usec): 00:36:42.585 | 1.00th=[ 611], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:36:42.585 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:36:42.585 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:42.585 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:36:42.585 | 99.99th=[42730] 00:36:42.585 bw ( KiB/s): min= 672, max= 768, per=50.09%, avg=760.00, stdev=25.16, samples=20 00:36:42.585 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:36:42.585 lat (usec) : 750=2.99%, 1000=46.80% 00:36:42.585 lat (msec) : 50=50.21% 00:36:42.585 cpu : usr=95.86%, sys=3.95%, ctx=13, majf=0, minf=45 00:36:42.585 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.585 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.585 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:42.585 00:36:42.585 Run status group 0 (all jobs): 00:36:42.585 READ: bw=1517KiB/s (1554kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=14.9MiB (15.6MB), run=10037-10039msec 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.585 00:36:42.585 real 0m11.579s 00:36:42.585 user 0m36.573s 00:36:42.585 sys 0m1.167s 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 ************************************ 00:36:42.585 END TEST fio_dif_1_multi_subsystems 00:36:42.585 ************************************ 00:36:42.585 11:17:00 nvmf_dif -- target/dif.sh@143 
-- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:42.585 11:17:00 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:42.585 11:17:00 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:42.585 11:17:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:42.585 ************************************ 00:36:42.585 START TEST fio_dif_rand_params 00:36:42.585 ************************************ 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:42.585 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.586 bdev_null0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.586 [2024-11-15 11:17:00.222772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
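With the listener up, the target for this pass is complete. The only thing distinguishing the rand_params passes from one another is the protection scheme: here bdev_null_create's --md-size 16 --dif-type 3 flags give each 512-byte block of the 64 MiB null bdev a 16-byte metadata region carrying type 3 DIF fields, and the --dif-insert-or-strip option set at nvmf_create_transport makes the target generate and verify those fields on behalf of the TCP initiator, which submits unprotected 128k I/Os. The same target setup as standalone calls, assuming a running nvmf_tgt reachable through SPDK's stock scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420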
00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:42.586 { 00:36:42.586 "params": { 00:36:42.586 "name": "Nvme$subsystem", 00:36:42.586 "trtype": "$TEST_TRANSPORT", 00:36:42.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:42.586 "adrfam": "ipv4", 00:36:42.586 "trsvcid": "$NVMF_PORT", 00:36:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:42.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:42.586 "hdgst": ${hdgst:-false}, 00:36:42.586 "ddgst": ${ddgst:-false} 00:36:42.586 }, 00:36:42.586 "method": "bdev_nvme_attach_controller" 00:36:42.586 } 00:36:42.586 EOF 00:36:42.586 )") 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:42.586 "params": { 00:36:42.586 "name": "Nvme0", 00:36:42.586 "trtype": "tcp", 00:36:42.586 "traddr": "10.0.0.2", 00:36:42.586 "adrfam": "ipv4", 00:36:42.586 "trsvcid": "4420", 00:36:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:42.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:42.586 "hdgst": false, 00:36:42.586 "ddgst": false 00:36:42.586 }, 00:36:42.586 "method": "bdev_nvme_attach_controller" 00:36:42.586 }' 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.586 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:42.587 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:42.587 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:42.587 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:42.587 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:42.587 11:17:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.587 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:42.587 ... 
00:36:42.587 fio-3.35 00:36:42.587 Starting 3 threads 00:36:46.787 00:36:46.787 filename0: (groupid=0, jobs=1): err= 0: pid=700269: Fri Nov 15 11:17:06 2024 00:36:46.787 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(194MiB/5045msec) 00:36:46.787 slat (nsec): min=5500, max=31093, avg=8077.44, stdev=1596.06 00:36:46.787 clat (usec): min=3773, max=90381, avg=9738.67, stdev=10548.49 00:36:46.787 lat (usec): min=3781, max=90389, avg=9746.74, stdev=10548.47 00:36:46.787 clat percentiles (usec): 00:36:46.787 | 1.00th=[ 4424], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 5866], 00:36:46.787 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 7963], 00:36:46.787 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[45351], 00:36:46.787 | 99.00th=[49021], 99.50th=[86508], 99.90th=[90702], 99.95th=[90702], 00:36:46.787 | 99.99th=[90702] 00:36:46.787 bw ( KiB/s): min=29440, max=50944, per=36.03%, avg=39571.30, stdev=6934.88, samples=10 00:36:46.787 iops : min= 230, max= 398, avg=309.10, stdev=54.25, samples=10 00:36:46.787 lat (msec) : 4=0.06%, 10=89.47%, 20=5.10%, 50=4.52%, 100=0.84% 00:36:46.787 cpu : usr=94.19%, sys=5.59%, ctx=5, majf=0, minf=80 00:36:46.787 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 issued rwts: total=1548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.787 filename0: (groupid=0, jobs=1): err= 0: pid=700270: Fri Nov 15 11:17:06 2024 00:36:46.787 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5047msec) 00:36:46.787 slat (nsec): min=5495, max=30747, avg=6285.81, stdev=1085.23 00:36:46.787 clat (usec): min=4263, max=86292, avg=9625.58, stdev=6180.04 00:36:46.787 lat (usec): min=4269, max=86298, avg=9631.86, stdev=6180.26 00:36:46.787 clat percentiles (usec): 00:36:46.787 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 6783], 00:36:46.787 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:36:46.787 | 70.00th=[10159], 80.00th=[11207], 90.00th=[11994], 95.00th=[12780], 00:36:46.787 | 99.00th=[47449], 99.50th=[50070], 99.90th=[55837], 99.95th=[86508], 00:36:46.787 | 99.99th=[86508] 00:36:46.787 bw ( KiB/s): min=27392, max=45568, per=36.47%, avg=40064.00, stdev=5196.58, samples=10 00:36:46.787 iops : min= 214, max= 356, avg=313.00, stdev=40.60, samples=10 00:36:46.787 lat (msec) : 10=68.86%, 20=29.16%, 50=1.47%, 100=0.51% 00:36:46.787 cpu : usr=94.51%, sys=5.25%, ctx=9, majf=0, minf=114 00:36:46.787 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 issued rwts: total=1567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.787 filename0: (groupid=0, jobs=1): err= 0: pid=700272: Fri Nov 15 11:17:06 2024 00:36:46.787 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(152MiB/5042msec) 00:36:46.787 slat (nsec): min=5489, max=31942, avg=6408.06, stdev=1622.56 00:36:46.787 clat (usec): min=4020, max=88907, avg=12430.64, stdev=14940.69 00:36:46.787 lat (usec): min=4026, max=88913, avg=12437.04, stdev=14940.65 00:36:46.787 clat percentiles (usec): 00:36:46.787 | 1.00th=[ 4555], 5.00th=[ 5145], 10.00th=[ 5538], 
20.00th=[ 6194], 00:36:46.787 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7767], 00:36:46.787 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[46400], 95.00th=[47973], 00:36:46.787 | 99.00th=[86508], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:36:46.787 | 99.99th=[88605] 00:36:46.787 bw ( KiB/s): min=18432, max=44288, per=28.27%, avg=31052.80, stdev=8359.23, samples=10 00:36:46.787 iops : min= 144, max= 346, avg=242.60, stdev=65.31, samples=10 00:36:46.787 lat (msec) : 10=87.66%, 20=0.33%, 50=10.36%, 100=1.64% 00:36:46.787 cpu : usr=96.11%, sys=3.67%, ctx=13, majf=0, minf=78 00:36:46.787 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.787 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.787 00:36:46.787 Run status group 0 (all jobs): 00:36:46.787 READ: bw=107MiB/s (112MB/s), 30.1MiB/s-38.8MiB/s (31.6MB/s-40.7MB/s), io=541MiB (568MB), run=5042-5047msec 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.787 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 bdev_null0 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 [2024-11-15 11:17:06.373553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 bdev_null1 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 bdev_null2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.049 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.050 11:17:06 
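The create_subsystems loop traced above reduces to four RPCs per subsystem. A minimal standalone sketch, assuming a running SPDK target with a TCP transport already created and scripts/rpc.py from the SPDK tree on PATH; the bdev name, sizes, NQN, serial number, address and port are taken verbatim from the trace, everything else is illustrative:

# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 2
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
# NVMe-oF subsystem, namespace, and TCP listener for it
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats for cnode1/bdev_null1 and cnode2/bdev_null2 with only the suffix changing.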
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.050 { 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme$subsystem", 00:36:47.050 "trtype": "$TEST_TRANSPORT", 00:36:47.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "$NVMF_PORT", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.050 "hdgst": ${hdgst:-false}, 00:36:47.050 "ddgst": ${ddgst:-false} 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 } 00:36:47.050 EOF 00:36:47.050 )") 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.050 { 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme$subsystem", 00:36:47.050 "trtype": "$TEST_TRANSPORT", 00:36:47.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "$NVMF_PORT", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.050 "hdgst": ${hdgst:-false}, 00:36:47.050 "ddgst": ${ddgst:-false} 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 } 00:36:47.050 EOF 00:36:47.050 )") 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.050 11:17:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.050 { 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme$subsystem", 00:36:47.050 "trtype": "$TEST_TRANSPORT", 00:36:47.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "$NVMF_PORT", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.050 "hdgst": ${hdgst:-false}, 00:36:47.050 "ddgst": ${ddgst:-false} 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 } 00:36:47.050 EOF 00:36:47.050 )") 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme0", 00:36:47.050 "trtype": "tcp", 00:36:47.050 "traddr": "10.0.0.2", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "4420", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.050 "hdgst": false, 00:36:47.050 "ddgst": false 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 },{ 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme1", 00:36:47.050 "trtype": "tcp", 00:36:47.050 "traddr": "10.0.0.2", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "4420", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:47.050 "hdgst": false, 00:36:47.050 "ddgst": false 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 },{ 00:36:47.050 "params": { 00:36:47.050 "name": "Nvme2", 00:36:47.050 "trtype": "tcp", 00:36:47.050 "traddr": "10.0.0.2", 00:36:47.050 "adrfam": "ipv4", 00:36:47.050 "trsvcid": "4420", 00:36:47.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:47.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:47.050 "hdgst": false, 00:36:47.050 "ddgst": false 00:36:47.050 }, 00:36:47.050 "method": "bdev_nvme_attach_controller" 00:36:47.050 }' 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:47.050 
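The jq/printf step above joins the three per-subsystem heredoc fragments into the bdev_nvme_attach_controller entries that fio's spdk_bdev engine reads over /dev/fd/62. Written out as a standalone file it would look roughly like the sketch below; the params are copied from the printf output in the trace, but the outer "subsystems"/"bdev" wrapper is an assumption based on the usual SPDK JSON-config layout and is not itself visible here (only the Nvme0 entry is shown, Nvme1/Nvme2 follow the same pattern):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF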
11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:47.050 11:17:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.648 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:47.648 ... 00:36:47.648 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:47.648 ... 00:36:47.648 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:47.648 ... 00:36:47.648 fio-3.35 00:36:47.648 Starting 24 threads 00:37:00.084 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701712: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=690, BW=2763KiB/s (2829kB/s)(27.0MiB/10008msec) 00:37:00.084 slat (nsec): min=5653, max=73731, avg=9047.78, stdev=6089.86 00:37:00.084 clat (usec): min=1202, max=43267, avg=23089.64, stdev=4045.95 00:37:00.084 lat (usec): min=1220, max=43278, avg=23098.69, stdev=4044.66 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[ 1565], 5.00th=[18744], 10.00th=[23200], 20.00th=[23725], 00:37:00.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.084 | 99.00th=[25035], 99.50th=[25297], 99.90th=[30540], 99.95th=[30802], 00:37:00.084 | 99.99th=[43254] 00:37:00.084 bw ( KiB/s): min= 2554, max= 4352, per=4.31%, avg=2767.89, stdev=389.37, samples=19 00:37:00.084 iops : min= 638, max= 1088, avg=691.89, stdev=97.38, samples=19 00:37:00.084 lat (msec) : 2=2.31%, 4=0.23%, 10=1.00%, 20=1.55%, 50=94.91% 00:37:00.084 cpu : usr=99.05%, sys=0.63%, ctx=25, majf=0, minf=28 00:37:00.084 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:00.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 issued rwts: total=6912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701713: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.1MiB/10045msec) 00:37:00.084 slat (usec): min=5, max=103, avg=17.99, stdev=14.60 00:37:00.084 clat (usec): min=15429, max=56448, avg=23828.49, stdev=828.29 00:37:00.084 lat (usec): min=15435, max=56454, avg=23846.48, stdev=827.55 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24773], 00:37:00.084 | 99.00th=[25297], 99.50th=[25560], 99.90th=[27657], 99.95th=[30278], 00:37:00.084 | 99.99th=[56361] 00:37:00.084 bw ( KiB/s): min= 2560, max= 2796, per=4.16%, avg=2667.80, stdev=58.69, samples=20 00:37:00.084 iops : min= 640, max= 699, avg=666.95, stdev=14.67, samples=20 00:37:00.084 lat (msec) : 20=0.67%, 50=99.31%, 100=0.01% 00:37:00.084 cpu : usr=99.06%, sys=0.65%, ctx=23, majf=0, minf=15 00:37:00.084 IO depths : 1=6.1%, 2=12.3%, 
4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701714: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=674, BW=2697KiB/s (2762kB/s)(26.4MiB/10007msec) 00:37:00.084 slat (nsec): min=5533, max=84835, avg=13911.35, stdev=12800.01 00:37:00.084 clat (usec): min=7985, max=32695, avg=23614.64, stdev=1949.71 00:37:00.084 lat (usec): min=8006, max=32724, avg=23628.55, stdev=1948.97 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[12125], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:37:00.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:37:00.084 | 99.00th=[25035], 99.50th=[25297], 99.90th=[31327], 99.95th=[32113], 00:37:00.084 | 99.99th=[32637] 00:37:00.084 bw ( KiB/s): min= 2554, max= 3168, per=4.20%, avg=2698.53, stdev=128.13, samples=19 00:37:00.084 iops : min= 638, max= 792, avg=674.53, stdev=32.08, samples=19 00:37:00.084 lat (msec) : 10=0.56%, 20=2.49%, 50=96.95% 00:37:00.084 cpu : usr=98.81%, sys=0.80%, ctx=36, majf=0, minf=17 00:37:00.084 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:00.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 issued rwts: total=6748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701715: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10015msec) 00:37:00.084 slat (nsec): min=5546, max=84409, avg=11742.27, stdev=10028.18 00:37:00.084 clat (usec): min=5803, max=38394, avg=23706.41, stdev=2455.81 00:37:00.084 lat (usec): min=5821, max=38413, avg=23718.16, stdev=2455.63 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[11469], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:37:00.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.084 | 99.00th=[31589], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:37:00.084 | 99.99th=[38536] 00:37:00.084 bw ( KiB/s): min= 2560, max= 2992, per=4.18%, avg=2685.20, stdev=100.35, samples=20 00:37:00.084 iops : min= 640, max= 748, avg=671.20, stdev=25.10, samples=20 00:37:00.084 lat (msec) : 10=0.71%, 20=3.03%, 50=96.26% 00:37:00.084 cpu : usr=98.81%, sys=0.86%, ctx=44, majf=0, minf=16 00:37:00.084 IO depths : 1=5.5%, 2=11.3%, 4=23.9%, 8=52.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:00.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 issued rwts: total=6732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701716: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10007msec) 00:37:00.084 slat (nsec): min=4631, max=95494, avg=24705.54, 
stdev=16787.07 00:37:00.084 clat (usec): min=11906, max=41102, avg=24114.29, stdev=2073.92 00:37:00.084 lat (usec): min=11919, max=41109, avg=24139.00, stdev=2073.43 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[16581], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.084 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[27132], 00:37:00.084 | 99.00th=[33162], 99.50th=[34866], 99.90th=[40633], 99.95th=[41157], 00:37:00.084 | 99.99th=[41157] 00:37:00.084 bw ( KiB/s): min= 2432, max= 2704, per=4.08%, avg=2620.05, stdev=89.00, samples=19 00:37:00.084 iops : min= 608, max= 676, avg=654.95, stdev=22.35, samples=19 00:37:00.084 lat (msec) : 20=1.22%, 50=98.78% 00:37:00.084 cpu : usr=98.83%, sys=0.84%, ctx=28, majf=0, minf=28 00:37:00.084 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:00.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.084 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.084 filename0: (groupid=0, jobs=1): err= 0: pid=701717: Fri Nov 15 11:17:17 2024 00:37:00.084 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10003msec) 00:37:00.084 slat (nsec): min=5597, max=98000, avg=22499.98, stdev=15543.98 00:37:00.084 clat (usec): min=4877, max=45310, avg=23898.26, stdev=2965.01 00:37:00.084 lat (usec): min=4883, max=45330, avg=23920.76, stdev=2965.59 00:37:00.084 clat percentiles (usec): 00:37:00.084 | 1.00th=[15139], 5.00th=[20317], 10.00th=[23200], 20.00th=[23462], 00:37:00.084 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.084 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[28181], 00:37:00.084 | 99.00th=[34866], 99.50th=[36963], 99.90th=[45351], 99.95th=[45351], 00:37:00.084 | 99.99th=[45351] 00:37:00.084 bw ( KiB/s): min= 2432, max= 2736, per=4.13%, avg=2648.32, stdev=74.10, samples=19 00:37:00.084 iops : min= 608, max= 684, avg=662.00, stdev=18.48, samples=19 00:37:00.085 lat (msec) : 10=0.36%, 20=4.50%, 50=95.14% 00:37:00.085 cpu : usr=99.01%, sys=0.68%, ctx=11, majf=0, minf=18 00:37:00.085 IO depths : 1=4.1%, 2=8.9%, 4=20.4%, 8=57.6%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename0: (groupid=0, jobs=1): err= 0: pid=701718: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10007msec) 00:37:00.085 slat (usec): min=5, max=119, avg=18.86, stdev=14.79 00:37:00.085 clat (usec): min=7116, max=25700, avg=23664.27, stdev=1669.57 00:37:00.085 lat (usec): min=7135, max=25712, avg=23683.13, stdev=1668.55 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[13173], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:00.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:37:00.085 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:37:00.085 | 99.99th=[25822] 00:37:00.085 bw ( KiB/s): min= 2554, max= 
2944, per=4.19%, avg=2686.74, stdev=86.36, samples=19 00:37:00.085 iops : min= 638, max= 736, avg=671.58, stdev=21.68, samples=19 00:37:00.085 lat (msec) : 10=0.70%, 20=1.21%, 50=98.10% 00:37:00.085 cpu : usr=99.01%, sys=0.66%, ctx=48, majf=0, minf=21 00:37:00.085 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename0: (groupid=0, jobs=1): err= 0: pid=701719: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.3MiB/10011msec) 00:37:00.085 slat (usec): min=5, max=104, avg=20.13, stdev=14.61 00:37:00.085 clat (usec): min=10029, max=38838, avg=23589.59, stdev=2496.05 00:37:00.085 lat (usec): min=10040, max=38849, avg=23609.72, stdev=2497.35 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[14877], 5.00th=[18482], 10.00th=[22938], 20.00th=[23462], 00:37:00.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:00.085 | 99.00th=[32900], 99.50th=[35390], 99.90th=[38536], 99.95th=[39060], 00:37:00.085 | 99.99th=[39060] 00:37:00.085 bw ( KiB/s): min= 2560, max= 2880, per=4.19%, avg=2689.80, stdev=87.39, samples=20 00:37:00.085 iops : min= 640, max= 720, avg=672.40, stdev=21.80, samples=20 00:37:00.085 lat (msec) : 20=6.64%, 50=93.36% 00:37:00.085 cpu : usr=98.47%, sys=0.95%, ctx=139, majf=0, minf=27 00:37:00.085 IO depths : 1=4.8%, 2=9.7%, 4=21.1%, 8=56.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701720: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10003msec) 00:37:00.085 slat (nsec): min=5529, max=95099, avg=21589.77, stdev=14809.65 00:37:00.085 clat (usec): min=6539, max=45280, avg=23704.18, stdev=1961.53 00:37:00.085 lat (usec): min=6545, max=45298, avg=23725.77, stdev=1962.11 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[16188], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:00.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.085 | 99.00th=[28443], 99.50th=[31065], 99.90th=[45351], 99.95th=[45351], 00:37:00.085 | 99.99th=[45351] 00:37:00.085 bw ( KiB/s): min= 2432, max= 2826, per=4.16%, avg=2670.21, stdev=79.48, samples=19 00:37:00.085 iops : min= 608, max= 706, avg=667.47, stdev=19.81, samples=19 00:37:00.085 lat (msec) : 10=0.21%, 20=3.02%, 50=96.77% 00:37:00.085 cpu : usr=99.03%, sys=0.66%, ctx=33, majf=0, minf=19 00:37:00.085 IO depths : 1=5.5%, 2=11.3%, 4=23.7%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701721: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10007msec) 00:37:00.085 slat (usec): min=5, max=106, avg=19.27, stdev=15.65 00:37:00.085 clat (usec): min=7966, max=34193, avg=23638.63, stdev=1962.40 00:37:00.085 lat (usec): min=7983, max=34215, avg=23657.90, stdev=1962.17 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[13435], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:00.085 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.085 | 99.00th=[25297], 99.50th=[31851], 99.90th=[33162], 99.95th=[33817], 00:37:00.085 | 99.99th=[34341] 00:37:00.085 bw ( KiB/s): min= 2554, max= 2997, per=4.19%, avg=2689.53, stdev=95.06, samples=19 00:37:00.085 iops : min= 638, max= 749, avg=672.26, stdev=23.77, samples=19 00:37:00.085 lat (msec) : 10=0.55%, 20=2.57%, 50=96.88% 00:37:00.085 cpu : usr=98.82%, sys=0.77%, ctx=96, majf=0, minf=36 00:37:00.085 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701722: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10020msec) 00:37:00.085 slat (nsec): min=5655, max=93970, avg=22342.98, stdev=15441.98 00:37:00.085 clat (usec): min=5463, max=41939, avg=23519.61, stdev=2119.11 00:37:00.085 lat (usec): min=5476, max=41945, avg=23541.96, stdev=2120.00 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[13698], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:00.085 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.085 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:00.085 | 99.00th=[25560], 99.50th=[30540], 99.90th=[41681], 99.95th=[41681], 00:37:00.085 | 99.99th=[41681] 00:37:00.085 bw ( KiB/s): min= 2554, max= 2864, per=4.20%, avg=2695.85, stdev=83.04, samples=20 00:37:00.085 iops : min= 638, max= 716, avg=673.85, stdev=20.80, samples=20 00:37:00.085 lat (msec) : 10=0.68%, 20=3.21%, 50=96.11% 00:37:00.085 cpu : usr=98.77%, sys=0.83%, ctx=81, majf=0, minf=21 00:37:00.085 IO depths : 1=5.8%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701723: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=667, BW=2669KiB/s (2733kB/s)(26.1MiB/10013msec) 00:37:00.085 slat (usec): min=5, max=107, avg=20.41, stdev=14.91 00:37:00.085 clat (usec): min=13820, max=31220, avg=23782.07, stdev=865.99 00:37:00.085 lat (usec): min=13835, max=31245, avg=23802.48, stdev=865.81 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[19792], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.085 | 30.00th=[23725], 40.00th=[23725], 
50.00th=[23725], 60.00th=[23987], 00:37:00.085 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24773], 00:37:00.085 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26346], 99.95th=[30802], 00:37:00.085 | 99.99th=[31327] 00:37:00.085 bw ( KiB/s): min= 2560, max= 2693, per=4.16%, avg=2668.75, stdev=46.91, samples=20 00:37:00.085 iops : min= 640, max= 673, avg=667.15, stdev=11.71, samples=20 00:37:00.085 lat (msec) : 20=1.11%, 50=98.89% 00:37:00.085 cpu : usr=98.33%, sys=1.03%, ctx=201, majf=0, minf=22 00:37:00.085 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701724: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10007msec) 00:37:00.085 slat (usec): min=5, max=106, avg=23.86, stdev=15.32 00:37:00.085 clat (usec): min=5527, max=31339, avg=23578.06, stdev=1736.24 00:37:00.085 lat (usec): min=5540, max=31399, avg=23601.92, stdev=1737.33 00:37:00.085 clat percentiles (usec): 00:37:00.085 | 1.00th=[13698], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:00.085 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.085 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:00.085 | 99.00th=[25297], 99.50th=[27657], 99.90th=[31065], 99.95th=[31327], 00:37:00.085 | 99.99th=[31327] 00:37:00.085 bw ( KiB/s): min= 2554, max= 2992, per=4.19%, avg=2689.53, stdev=112.47, samples=19 00:37:00.085 iops : min= 638, max= 748, avg=672.26, stdev=28.13, samples=19 00:37:00.085 lat (msec) : 10=0.34%, 20=2.53%, 50=97.13% 00:37:00.085 cpu : usr=98.59%, sys=0.91%, ctx=126, majf=0, minf=25 00:37:00.085 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:00.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.085 issued rwts: total=6726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.085 filename1: (groupid=0, jobs=1): err= 0: pid=701725: Fri Nov 15 11:17:17 2024 00:37:00.085 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10011msec) 00:37:00.085 slat (usec): min=5, max=100, avg=19.58, stdev=15.22 00:37:00.085 clat (usec): min=11601, max=40647, avg=23696.43, stdev=1949.07 00:37:00.086 lat (usec): min=11607, max=40656, avg=23716.01, stdev=1950.14 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[15008], 5.00th=[20579], 10.00th=[23200], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.086 | 99.00th=[30278], 99.50th=[32113], 99.90th=[38011], 99.95th=[38011], 00:37:00.086 | 99.99th=[40633] 00:37:00.086 bw ( KiB/s): min= 2560, max= 2952, per=4.18%, avg=2681.60, stdev=91.01, samples=20 00:37:00.086 iops : min= 640, max= 738, avg=670.40, stdev=22.75, samples=20 00:37:00.086 lat (msec) : 20=4.45%, 50=95.55% 00:37:00.086 cpu : usr=98.95%, sys=0.75%, ctx=15, majf=0, minf=28 00:37:00.086 IO depths : 1=4.5%, 2=9.4%, 4=20.3%, 8=57.1%, 16=8.8%, 32=0.0%, >=64=0.0% 
00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=92.4%, 8=2.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename1: (groupid=0, jobs=1): err= 0: pid=701726: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10004msec) 00:37:00.086 slat (nsec): min=5545, max=92007, avg=21697.35, stdev=15843.37 00:37:00.086 clat (usec): min=4860, max=46015, avg=23786.67, stdev=1836.22 00:37:00.086 lat (usec): min=4867, max=46030, avg=23808.36, stdev=1836.05 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[16057], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.086 | 99.00th=[25297], 99.50th=[31589], 99.90th=[45876], 99.95th=[45876], 00:37:00.086 | 99.99th=[45876] 00:37:00.086 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2653.37, stdev=70.09, samples=19 00:37:00.086 iops : min= 608, max= 672, avg=663.26, stdev=17.49, samples=19 00:37:00.086 lat (msec) : 10=0.24%, 20=1.08%, 50=98.68% 00:37:00.086 cpu : usr=98.83%, sys=0.76%, ctx=103, majf=0, minf=23 00:37:00.086 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename1: (groupid=0, jobs=1): err= 0: pid=701727: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=659, BW=2639KiB/s (2703kB/s)(25.8MiB/10003msec) 00:37:00.086 slat (nsec): min=5650, max=91004, avg=19180.86, stdev=14375.20 00:37:00.086 clat (usec): min=6475, max=45453, avg=24115.24, stdev=3315.35 00:37:00.086 lat (usec): min=6481, max=45469, avg=24134.42, stdev=3315.57 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[15401], 5.00th=[19268], 10.00th=[20841], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[30016], 00:37:00.086 | 99.00th=[35914], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:37:00.086 | 99.99th=[45351] 00:37:00.086 bw ( KiB/s): min= 2432, max= 2720, per=4.09%, avg=2626.42, stdev=72.69, samples=19 00:37:00.086 iops : min= 608, max= 680, avg=656.53, stdev=18.21, samples=19 00:37:00.086 lat (msec) : 10=0.24%, 20=7.08%, 50=92.68% 00:37:00.086 cpu : usr=99.12%, sys=0.57%, ctx=14, majf=0, minf=20 00:37:00.086 IO depths : 1=2.5%, 2=5.1%, 4=12.3%, 8=68.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=90.8%, 8=5.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename2: (groupid=0, jobs=1): err= 0: pid=701728: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=678, BW=2715KiB/s (2780kB/s)(26.6MiB/10019msec) 00:37:00.086 slat (nsec): min=5514, max=65923, avg=16763.54, stdev=10639.57 00:37:00.086 clat (usec): 
min=13079, max=39926, avg=23426.10, stdev=2777.40 00:37:00.086 lat (usec): min=13087, max=39960, avg=23442.86, stdev=2779.72 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[14353], 5.00th=[16909], 10.00th=[20579], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.086 | 99.00th=[34866], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:37:00.086 | 99.99th=[40109] 00:37:00.086 bw ( KiB/s): min= 2560, max= 3040, per=4.20%, avg=2692.47, stdev=101.72, samples=19 00:37:00.086 iops : min= 640, max= 760, avg=673.11, stdev=25.44, samples=19 00:37:00.086 lat (msec) : 20=9.28%, 50=90.72% 00:37:00.086 cpu : usr=98.75%, sys=0.87%, ctx=93, majf=0, minf=20 00:37:00.086 IO depths : 1=4.4%, 2=9.2%, 4=21.0%, 8=57.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename2: (groupid=0, jobs=1): err= 0: pid=701729: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=678, BW=2713KiB/s (2778kB/s)(26.5MiB/10003msec) 00:37:00.086 slat (nsec): min=5523, max=98357, avg=17912.73, stdev=14712.59 00:37:00.086 clat (usec): min=6374, max=45650, avg=23487.93, stdev=3132.17 00:37:00.086 lat (usec): min=6380, max=45669, avg=23505.84, stdev=3133.47 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[13435], 5.00th=[17695], 10.00th=[20055], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[27657], 00:37:00.086 | 99.00th=[33162], 99.50th=[35914], 99.90th=[45351], 99.95th=[45876], 00:37:00.086 | 99.99th=[45876] 00:37:00.086 bw ( KiB/s): min= 2432, max= 2848, per=4.21%, avg=2701.37, stdev=90.01, samples=19 00:37:00.086 iops : min= 608, max= 712, avg=675.26, stdev=22.49, samples=19 00:37:00.086 lat (msec) : 10=0.18%, 20=9.70%, 50=90.12% 00:37:00.086 cpu : usr=98.91%, sys=0.78%, ctx=13, majf=0, minf=27 00:37:00.086 IO depths : 1=1.2%, 2=2.6%, 4=7.1%, 8=74.4%, 16=14.7%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=90.3%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename2: (groupid=0, jobs=1): err= 0: pid=701730: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=670, BW=2680KiB/s (2745kB/s)(26.2MiB/10020msec) 00:37:00.086 slat (usec): min=5, max=118, avg=17.85, stdev=11.78 00:37:00.086 clat (usec): min=8073, max=37802, avg=23715.57, stdev=1700.32 00:37:00.086 lat (usec): min=8091, max=37812, avg=23733.43, stdev=1699.47 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[14091], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.086 | 99.00th=[25560], 99.50th=[26870], 99.90th=[37487], 99.95th=[38011], 00:37:00.086 | 99.99th=[38011] 00:37:00.086 bw ( KiB/s): min= 2554, max= 2944, per=4.17%, avg=2678.00, stdev=78.36, 
samples=20 00:37:00.086 iops : min= 638, max= 736, avg=669.40, stdev=19.63, samples=20 00:37:00.086 lat (msec) : 10=0.61%, 20=1.30%, 50=98.09% 00:37:00.086 cpu : usr=98.86%, sys=0.82%, ctx=32, majf=0, minf=28 00:37:00.086 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename2: (groupid=0, jobs=1): err= 0: pid=701731: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.2MiB/10008msec) 00:37:00.086 slat (usec): min=5, max=107, avg=12.53, stdev=10.44 00:37:00.086 clat (usec): min=9362, max=48515, avg=23850.21, stdev=3415.82 00:37:00.086 lat (usec): min=9404, max=48532, avg=23862.73, stdev=3416.36 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[14746], 5.00th=[18220], 10.00th=[19792], 20.00th=[21627], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:00.086 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27919], 95.00th=[29754], 00:37:00.086 | 99.00th=[33817], 99.50th=[36439], 99.90th=[39584], 99.95th=[40633], 00:37:00.086 | 99.99th=[48497] 00:37:00.086 bw ( KiB/s): min= 2491, max= 2800, per=4.19%, avg=2686.26, stdev=82.53, samples=19 00:37:00.086 iops : min= 622, max= 700, avg=671.47, stdev=20.75, samples=19 00:37:00.086 lat (msec) : 10=0.06%, 20=12.38%, 50=87.56% 00:37:00.086 cpu : usr=98.77%, sys=0.76%, ctx=129, majf=0, minf=25 00:37:00.086 IO depths : 1=0.1%, 2=0.4%, 4=4.4%, 8=79.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:37:00.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 complete : 0=0.0%, 4=89.6%, 8=8.2%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.086 issued rwts: total=6696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.086 filename2: (groupid=0, jobs=1): err= 0: pid=701732: Fri Nov 15 11:17:17 2024 00:37:00.086 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10007msec) 00:37:00.086 slat (nsec): min=5672, max=90822, avg=18878.61, stdev=14086.93 00:37:00.086 clat (usec): min=11772, max=34608, avg=23839.82, stdev=1218.27 00:37:00.086 lat (usec): min=11778, max=34622, avg=23858.70, stdev=1217.92 00:37:00.086 clat percentiles (usec): 00:37:00.086 | 1.00th=[21627], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:37:00.086 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.086 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:00.086 | 99.00th=[25297], 99.50th=[31589], 99.90th=[33162], 99.95th=[34341], 00:37:00.086 | 99.99th=[34866] 00:37:00.086 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2660.42, stdev=54.11, samples=19 00:37:00.086 iops : min= 638, max= 672, avg=665.05, stdev=13.57, samples=19 00:37:00.086 lat (msec) : 20=0.99%, 50=99.01% 00:37:00.086 cpu : usr=98.95%, sys=0.77%, ctx=14, majf=0, minf=23 00:37:00.086 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:00.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.087 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:37:00.087 filename2: (groupid=0, jobs=1): err= 0: pid=701733: Fri Nov 15 11:17:17 2024 00:37:00.087 read: IOPS=673, BW=2693KiB/s (2757kB/s)(26.3MiB/10010msec) 00:37:00.087 slat (nsec): min=5665, max=94220, avg=21775.86, stdev=15077.17 00:37:00.087 clat (usec): min=11416, max=40428, avg=23575.85, stdev=1971.70 00:37:00.087 lat (usec): min=11438, max=40455, avg=23597.63, stdev=1973.43 00:37:00.087 clat percentiles (usec): 00:37:00.087 | 1.00th=[15533], 5.00th=[19530], 10.00th=[23200], 20.00th=[23462], 00:37:00.087 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:00.087 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:00.087 | 99.00th=[29492], 99.50th=[33424], 99.90th=[40109], 99.95th=[40633], 00:37:00.087 | 99.99th=[40633] 00:37:00.087 bw ( KiB/s): min= 2560, max= 3056, per=4.19%, avg=2688.80, stdev=106.57, samples=20 00:37:00.087 iops : min= 640, max= 764, avg=672.20, stdev=26.64, samples=20 00:37:00.087 lat (msec) : 20=5.28%, 50=94.72% 00:37:00.087 cpu : usr=98.48%, sys=1.03%, ctx=174, majf=0, minf=18 00:37:00.087 IO depths : 1=5.5%, 2=11.1%, 4=23.0%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:00.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.087 filename2: (groupid=0, jobs=1): err= 0: pid=701734: Fri Nov 15 11:17:17 2024 00:37:00.087 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.1MiB/10003msec) 00:37:00.087 slat (usec): min=5, max=108, avg=21.39, stdev=15.69 00:37:00.087 clat (usec): min=4893, max=45960, avg=23736.25, stdev=2487.63 00:37:00.087 lat (usec): min=4899, max=45976, avg=23757.64, stdev=2488.94 00:37:00.087 clat percentiles (usec): 00:37:00.087 | 1.00th=[15008], 5.00th=[21103], 10.00th=[23200], 20.00th=[23462], 00:37:00.087 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:00.087 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:00.087 | 99.00th=[31851], 99.50th=[36963], 99.90th=[45876], 99.95th=[45876], 00:37:00.087 | 99.99th=[45876] 00:37:00.087 bw ( KiB/s): min= 2432, max= 2768, per=4.15%, avg=2665.16, stdev=70.98, samples=19 00:37:00.087 iops : min= 608, max= 692, avg=666.21, stdev=17.71, samples=19 00:37:00.087 lat (msec) : 10=0.36%, 20=3.93%, 50=95.71% 00:37:00.087 cpu : usr=98.51%, sys=0.90%, ctx=119, majf=0, minf=34 00:37:00.087 IO depths : 1=3.2%, 2=7.1%, 4=17.2%, 8=61.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:37:00.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 complete : 0=0.0%, 4=92.7%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.087 filename2: (groupid=0, jobs=1): err= 0: pid=701735: Fri Nov 15 11:17:17 2024 00:37:00.087 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10003msec) 00:37:00.087 slat (nsec): min=5296, max=90962, avg=22174.03, stdev=15517.69 00:37:00.087 clat (usec): min=5220, max=57081, avg=23628.62, stdev=2652.09 00:37:00.087 lat (usec): min=5226, max=57097, avg=23650.79, stdev=2653.49 00:37:00.087 clat percentiles (usec): 00:37:00.087 | 1.00th=[14353], 5.00th=[19530], 10.00th=[22938], 20.00th=[23462], 00:37:00.087 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 
00:37:00.087 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:00.087 | 99.00th=[32375], 99.50th=[34341], 99.90th=[44827], 99.95th=[44827], 00:37:00.087 | 99.99th=[56886] 00:37:00.087 bw ( KiB/s): min= 2452, max= 3024, per=4.17%, avg=2675.89, stdev=115.18, samples=19 00:37:00.087 iops : min= 613, max= 756, avg=668.89, stdev=28.80, samples=19 00:37:00.087 lat (msec) : 10=0.24%, 20=4.96%, 50=94.77%, 100=0.03% 00:37:00.087 cpu : usr=98.96%, sys=0.68%, ctx=49, majf=0, minf=18 00:37:00.087 IO depths : 1=4.6%, 2=9.5%, 4=20.7%, 8=56.7%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:00.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.087 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:00.087 00:37:00.087 Run status group 0 (all jobs): 00:37:00.087 READ: bw=62.7MiB/s (65.7MB/s), 2629KiB/s-2763KiB/s (2692kB/s-2829kB/s), io=629MiB (660MB), run=10003-10045msec 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 
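A quick consistency check on the group summary above: the aggregate READ bandwidth is simply total data read over the group's wall-clock time, so 629 MiB over roughly 10.0 s of runtime matches the reported 62.7 MiB/s. For example:

# 629 MiB read, per-job runtimes ranged 10003-10045 ms (~10.03 s)
echo 'scale=1; 629 / 10.03' | bc   # => 62.7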
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 bdev_null0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.087 [2024-11-15 11:17:18.231958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:00.087 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.088 bdev_null1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
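Note that this second pass recreates the null bdevs with --dif-type 1 where the first pass used type 2. A sketch for confirming what was actually created, assuming the default RPC socket and that bdev_get_bdevs exposes the block_size/md_size/dif_type fields as in current SPDK:

scripts/rpc.py bdev_get_bdevs -b bdev_null0 | jq '.[0] | {name, block_size, md_size, dif_type}'
# expected roughly: {"name":"bdev_null0","block_size":512,"md_size":16,"dif_type":1}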
00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:00.088 { 00:37:00.088 "params": { 00:37:00.088 "name": "Nvme$subsystem", 00:37:00.088 "trtype": "$TEST_TRANSPORT", 00:37:00.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.088 "adrfam": "ipv4", 00:37:00.088 "trsvcid": "$NVMF_PORT", 00:37:00.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.088 "hdgst": ${hdgst:-false}, 00:37:00.088 "ddgst": ${ddgst:-false} 00:37:00.088 }, 00:37:00.088 "method": "bdev_nvme_attach_controller" 00:37:00.088 } 00:37:00.088 EOF 00:37:00.088 )") 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:00.088 { 00:37:00.088 "params": { 00:37:00.088 "name": "Nvme$subsystem", 00:37:00.088 "trtype": "$TEST_TRANSPORT", 00:37:00.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.088 "adrfam": "ipv4", 00:37:00.088 "trsvcid": "$NVMF_PORT", 00:37:00.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.088 "hdgst": ${hdgst:-false}, 00:37:00.088 "ddgst": ${ddgst:-false} 00:37:00.088 }, 00:37:00.088 "method": "bdev_nvme_attach_controller" 00:37:00.088 } 00:37:00.088 EOF 00:37:00.088 )") 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:00.088 11:17:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:00.088 "params": { 00:37:00.088 "name": "Nvme0", 00:37:00.088 "trtype": "tcp", 00:37:00.088 "traddr": "10.0.0.2", 00:37:00.088 "adrfam": "ipv4", 00:37:00.088 "trsvcid": "4420", 00:37:00.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:00.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:00.088 "hdgst": false, 00:37:00.088 "ddgst": false 00:37:00.088 }, 00:37:00.088 "method": "bdev_nvme_attach_controller" 00:37:00.088 },{ 00:37:00.088 "params": { 00:37:00.088 "name": "Nvme1", 00:37:00.088 "trtype": "tcp", 00:37:00.088 "traddr": "10.0.0.2", 00:37:00.088 "adrfam": "ipv4", 00:37:00.088 "trsvcid": "4420", 00:37:00.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:00.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:00.088 "hdgst": false, 00:37:00.088 "ddgst": false 00:37:00.088 }, 00:37:00.088 "method": "bdev_nvme_attach_controller" 00:37:00.088 }' 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:00.088 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.088 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:00.088 ... 00:37:00.088 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:00.088 ... 
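The trace above shows the harness assembling a per-controller "bdev_nvme_attach_controller" fragment, folding it into a JSON config with jq, and handing that config plus a generated job file to fio's spdk_bdev external ioengine over anonymous file descriptors (/dev/fd/62 and /dev/fd/61). A minimal standalone sketch of the same launch follows; the SPDK path is an assumption, and the address/NQN values simply reuse what this run printed.

# ---- sketch: fio against an SPDK bdev (not part of the captured log) ----
SPDK_DIR=/path/to/spdk                     # assumption: local SPDK build tree
PLUGIN="$SPDK_DIR/build/fio/spdk_bdev"     # fio ioengine plugin built by SPDK

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF

# Preloading the plugin mirrors the asan_lib/LD_PRELOAD bookkeeping traced
# above; SPDK fio ioengines also require fio's thread mode.
LD_PRELOAD="$PLUGIN" fio --ioengine=spdk_bdev --thread \
    --spdk_json_conf=/tmp/nvme0.json \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=8k --iodepth=8 --time_based --runtime=5
# --------------------------------------------------------------------------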
00:37:00.088 fio-3.35 00:37:00.088 Starting 4 threads 00:37:05.386 00:37:05.386 filename0: (groupid=0, jobs=1): err= 0: pid=704228: Fri Nov 15 11:17:24 2024 00:37:05.386 read: IOPS=2902, BW=22.7MiB/s (23.8MB/s)(113MiB/5002msec) 00:37:05.386 slat (nsec): min=5482, max=85663, avg=6087.35, stdev=2185.38 00:37:05.386 clat (usec): min=1459, max=4751, avg=2740.59, stdev=362.45 00:37:05.386 lat (usec): min=1481, max=4757, avg=2746.67, stdev=362.32 00:37:05.386 clat percentiles (usec): 00:37:05.386 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:37:05.386 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:05.386 | 70.00th=[ 2802], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3490], 00:37:05.386 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4621], 00:37:05.386 | 99.99th=[ 4752] 00:37:05.386 bw ( KiB/s): min=23008, max=23504, per=24.75%, avg=23235.56, stdev=150.26, samples=9 00:37:05.386 iops : min= 2876, max= 2938, avg=2904.44, stdev=18.78, samples=9 00:37:05.386 lat (msec) : 2=0.88%, 4=97.18%, 10=1.94% 00:37:05.386 cpu : usr=96.38%, sys=3.40%, ctx=9, majf=0, minf=39 00:37:05.386 IO depths : 1=0.1%, 2=0.2%, 4=70.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.386 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.386 issued rwts: total=14516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.386 filename0: (groupid=0, jobs=1): err= 0: pid=704229: Fri Nov 15 11:17:24 2024 00:37:05.386 read: IOPS=2900, BW=22.7MiB/s (23.8MB/s)(113MiB/5001msec) 00:37:05.386 slat (nsec): min=5487, max=65216, avg=6133.09, stdev=2099.52 00:37:05.386 clat (usec): min=1353, max=5514, avg=2742.09, stdev=306.90 00:37:05.386 lat (usec): min=1359, max=5547, avg=2748.22, stdev=307.03 00:37:05.386 clat percentiles (usec): 00:37:05.386 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2540], 00:37:05.386 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:05.386 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3228], 00:37:05.386 | 99.00th=[ 4047], 99.50th=[ 4178], 99.90th=[ 4490], 99.95th=[ 5407], 00:37:05.386 | 99.99th=[ 5473] 00:37:05.386 bw ( KiB/s): min=22928, max=23424, per=24.70%, avg=23189.33, stdev=183.30, samples=9 00:37:05.386 iops : min= 2866, max= 2928, avg=2898.67, stdev=22.91, samples=9 00:37:05.386 lat (msec) : 2=0.52%, 4=98.38%, 10=1.10% 00:37:05.387 cpu : usr=96.50%, sys=3.28%, ctx=8, majf=0, minf=68 00:37:05.387 IO depths : 1=0.1%, 2=0.2%, 4=71.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 issued rwts: total=14503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.387 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.387 filename1: (groupid=0, jobs=1): err= 0: pid=704230: Fri Nov 15 11:17:24 2024 00:37:05.387 read: IOPS=2873, BW=22.5MiB/s (23.5MB/s)(112MiB/5001msec) 00:37:05.387 slat (nsec): min=5482, max=65146, avg=6045.03, stdev=1979.48 00:37:05.387 clat (usec): min=1003, max=6216, avg=2767.34, stdev=319.05 00:37:05.387 lat (usec): min=1009, max=6249, avg=2773.39, stdev=319.16 00:37:05.387 clat percentiles (usec): 00:37:05.387 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2573], 00:37:05.387 | 30.00th=[ 2671], 
40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:05.387 | 70.00th=[ 2802], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3294], 00:37:05.387 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4948], 00:37:05.387 | 99.99th=[ 6128] 00:37:05.387 bw ( KiB/s): min=22525, max=23280, per=24.47%, avg=22973.89, stdev=239.42, samples=9 00:37:05.387 iops : min= 2815, max= 2910, avg=2871.67, stdev=30.07, samples=9 00:37:05.387 lat (msec) : 2=0.56%, 4=98.06%, 10=1.38% 00:37:05.387 cpu : usr=96.54%, sys=3.22%, ctx=8, majf=0, minf=39 00:37:05.387 IO depths : 1=0.1%, 2=0.1%, 4=73.0%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 issued rwts: total=14371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.387 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.387 filename1: (groupid=0, jobs=1): err= 0: pid=704232: Fri Nov 15 11:17:24 2024 00:37:05.387 read: IOPS=3062, BW=23.9MiB/s (25.1MB/s)(120MiB/5002msec) 00:37:05.387 slat (nsec): min=5483, max=83582, avg=6090.56, stdev=2022.82 00:37:05.387 clat (usec): min=982, max=4446, avg=2596.49, stdev=390.74 00:37:05.387 lat (usec): min=987, max=4452, avg=2602.58, stdev=390.76 00:37:05.387 clat percentiles (usec): 00:37:05.387 | 1.00th=[ 1729], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2278], 00:37:05.387 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2704], 00:37:05.387 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 3097], 95.00th=[ 3392], 00:37:05.387 | 99.00th=[ 3687], 99.50th=[ 3884], 99.90th=[ 4146], 99.95th=[ 4293], 00:37:05.387 | 99.99th=[ 4424] 00:37:05.387 bw ( KiB/s): min=24064, max=24768, per=26.10%, avg=24508.44, stdev=263.37, samples=9 00:37:05.387 iops : min= 3008, max= 3096, avg=3063.56, stdev=32.92, samples=9 00:37:05.387 lat (usec) : 1000=0.02% 00:37:05.387 lat (msec) : 2=3.69%, 4=95.89%, 10=0.40% 00:37:05.387 cpu : usr=94.44%, sys=4.06%, ctx=356, majf=0, minf=51 00:37:05.387 IO depths : 1=0.1%, 2=0.5%, 4=69.7%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.387 issued rwts: total=15318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.387 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:05.387 00:37:05.387 Run status group 0 (all jobs): 00:37:05.387 READ: bw=91.7MiB/s (96.1MB/s), 22.5MiB/s-23.9MiB/s (23.5MB/s-25.1MB/s), io=459MiB (481MB), run=5001-5002msec 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 
11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 00:37:05.387 real 0m24.474s 00:37:05.387 user 5m15.319s 00:37:05.387 sys 0m4.432s 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 ************************************ 00:37:05.387 END TEST fio_dif_rand_params 00:37:05.387 ************************************ 00:37:05.387 11:17:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:05.387 11:17:24 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:05.387 11:17:24 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 ************************************ 00:37:05.387 START TEST fio_dif_digest 00:37:05.387 ************************************ 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:05.387 11:17:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 bdev_null0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:05.387 [2024-11-15 11:17:24.779719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.387 { 00:37:05.387 "params": { 00:37:05.387 "name": "Nvme$subsystem", 00:37:05.387 "trtype": "$TEST_TRANSPORT", 00:37:05.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.387 "adrfam": "ipv4", 00:37:05.387 "trsvcid": "$NVMF_PORT", 00:37:05.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:37:05.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.387 "hdgst": ${hdgst:-false}, 00:37:05.387 "ddgst": ${ddgst:-false} 00:37:05.387 }, 00:37:05.387 "method": "bdev_nvme_attach_controller" 00:37:05.387 } 00:37:05.387 EOF 00:37:05.387 )") 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:05.387 "params": { 00:37:05.387 "name": "Nvme0", 00:37:05.387 "trtype": "tcp", 00:37:05.387 "traddr": "10.0.0.2", 00:37:05.387 "adrfam": "ipv4", 00:37:05.387 "trsvcid": "4420", 00:37:05.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.387 "hdgst": true, 00:37:05.387 "ddgst": true 00:37:05.387 }, 00:37:05.387 "method": "bdev_nvme_attach_controller" 00:37:05.387 }' 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.387 11:17:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.957 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:05.957 ... 
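Relative to the rand_params run earlier, the only change in this generated config is that "hdgst" and "ddgst" are now true, so the SPDK initiator negotiates CRC32C header and data digests on the NVMe/TCP connection that fio exercises. For comparison, a kernel initiator would request the same digests through nvme-cli; a sketch, assuming a reasonably recent nvme-cli:

# ---- sketch: kernel-initiator digests (not part of the captured log) ----
# Equivalent of hdgst/ddgst=true in the JSON above: request CRC32C on
# NVMe/TCP PDU headers and data when connecting.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hdr-digest --data-digest
# --------------------------------------------------------------------------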
00:37:05.957 fio-3.35 00:37:05.957 Starting 3 threads 00:37:18.189 00:37:18.189 filename0: (groupid=0, jobs=1): err= 0: pid=705438: Fri Nov 15 11:17:35 2024 00:37:18.189 read: IOPS=407, BW=51.0MiB/s (53.4MB/s)(512MiB/10047msec) 00:37:18.189 slat (nsec): min=5902, max=32657, avg=6486.27, stdev=958.18 00:37:18.189 clat (usec): min=3744, max=47421, avg=7337.90, stdev=1904.99 00:37:18.189 lat (usec): min=3751, max=47427, avg=7344.38, stdev=1905.07 00:37:18.189 clat percentiles (usec): 00:37:18.189 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5735], 00:37:18.189 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7504], 00:37:18.189 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10159], 00:37:18.189 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12256], 99.95th=[13304], 00:37:18.189 | 99.99th=[47449] 00:37:18.189 bw ( KiB/s): min=46080, max=59904, per=52.65%, avg=52574.32, stdev=3628.63, samples=19 00:37:18.189 iops : min= 360, max= 468, avg=410.74, stdev=28.35, samples=19 00:37:18.189 lat (msec) : 4=0.12%, 10=92.97%, 20=6.86%, 50=0.05% 00:37:18.189 cpu : usr=92.78%, sys=6.99%, ctx=24, majf=0, minf=197 00:37:18.189 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.189 issued rwts: total=4097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.189 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.189 filename0: (groupid=0, jobs=1): err= 0: pid=705439: Fri Nov 15 11:17:35 2024 00:37:18.189 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:37:18.189 slat (nsec): min=5866, max=32436, avg=6649.52, stdev=1279.57 00:37:18.189 clat (msec): min=6, max=129, avg=15.80, stdev=16.58 00:37:18.189 lat (msec): min=6, max=129, avg=15.81, stdev=16.58 00:37:18.189 clat percentiles (msec): 00:37:18.189 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:37:18.189 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:37:18.189 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 50], 95.00th=[ 51], 00:37:18.189 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 93], 99.95th=[ 130], 00:37:18.189 | 99.99th=[ 130] 00:37:18.189 bw ( KiB/s): min=12800, max=34304, per=24.37%, avg=24332.80, stdev=6028.09, samples=20 00:37:18.189 iops : min= 100, max= 268, avg=190.10, stdev=47.09, samples=20 00:37:18.189 lat (msec) : 10=55.39%, 20=30.53%, 50=6.62%, 100=7.41%, 250=0.05% 00:37:18.189 cpu : usr=95.16%, sys=4.62%, ctx=14, majf=0, minf=41 00:37:18.189 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.189 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.189 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.189 filename0: (groupid=0, jobs=1): err= 0: pid=705440: Fri Nov 15 11:17:35 2024 00:37:18.189 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(230MiB/10003msec) 00:37:18.189 slat (nsec): min=5873, max=32276, avg=7956.95, stdev=1618.04 00:37:18.189 clat (msec): min=5, max=132, avg=16.32, stdev=16.62 00:37:18.189 lat (msec): min=5, max=132, avg=16.32, stdev=16.62 00:37:18.189 clat percentiles (msec): 00:37:18.189 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:37:18.189 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 
00:37:18.189 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 50], 95.00th=[ 52], 00:37:18.189 | 99.00th=[ 90], 99.50th=[ 91], 99.90th=[ 94], 99.95th=[ 133], 00:37:18.189 | 99.99th=[ 133] 00:37:18.189 bw ( KiB/s): min=16640, max=32256, per=23.60%, avg=23565.47, stdev=5024.15, samples=19 00:37:18.189 iops : min= 130, max= 252, avg=184.11, stdev=39.25, samples=19 00:37:18.189 lat (msec) : 10=43.53%, 20=41.78%, 50=5.55%, 100=9.09%, 250=0.05% 00:37:18.189 cpu : usr=94.88%, sys=4.89%, ctx=12, majf=0, minf=80 00:37:18.189 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.190 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:18.190 00:37:18.190 Run status group 0 (all jobs): 00:37:18.190 READ: bw=97.5MiB/s (102MB/s), 23.0MiB/s-51.0MiB/s (24.1MB/s-53.4MB/s), io=980MiB (1027MB), run=10003-10047msec 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.190 00:37:18.190 real 0m11.263s 00:37:18.190 user 0m40.846s 00:37:18.190 sys 0m1.977s 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:18.190 11:17:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 ************************************ 00:37:18.190 END TEST fio_dif_digest 00:37:18.190 ************************************ 00:37:18.190 11:17:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:18.190 11:17:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.190 rmmod nvme_tcp 00:37:18.190 rmmod nvme_fabrics 00:37:18.190 rmmod nvme_keyring 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.190 11:17:36 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 695186 ']' 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 695186 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 695186 ']' 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 695186 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695186 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695186' 00:37:18.190 killing process with pid 695186 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@971 -- # kill 695186 00:37:18.190 11:17:36 nvmf_dif -- common/autotest_common.sh@976 -- # wait 695186 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:18.190 11:17:36 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:20.737 Waiting for block devices as requested 00:37:20.737 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.737 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.998 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:20.998 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:21.259 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:21.259 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:21.259 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:21.520 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:21.520 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:21.520 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:21.520 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:21.780 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:22.042 11:17:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.042 11:17:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:22.042 11:17:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.587 11:17:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:24.587 00:37:24.587 real 1m18.508s 00:37:24.587 user 7m54.002s 00:37:24.587 sys 0m22.053s 00:37:24.587 
11:17:43 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:24.587 11:17:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:24.587 ************************************ 00:37:24.587 END TEST nvmf_dif 00:37:24.587 ************************************ 00:37:24.587 11:17:43 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:24.587 11:17:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:24.587 11:17:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:24.587 11:17:43 -- common/autotest_common.sh@10 -- # set +x 00:37:24.587 ************************************ 00:37:24.587 START TEST nvmf_abort_qd_sizes 00:37:24.587 ************************************ 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:24.587 * Looking for test storage... 00:37:24.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:24.587 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:24.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.588 --rc genhtml_branch_coverage=1 00:37:24.588 --rc genhtml_function_coverage=1 00:37:24.588 --rc genhtml_legend=1 00:37:24.588 --rc geninfo_all_blocks=1 00:37:24.588 --rc geninfo_unexecuted_blocks=1 00:37:24.588 00:37:24.588 ' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:24.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.588 --rc genhtml_branch_coverage=1 00:37:24.588 --rc genhtml_function_coverage=1 00:37:24.588 --rc genhtml_legend=1 00:37:24.588 --rc geninfo_all_blocks=1 00:37:24.588 --rc geninfo_unexecuted_blocks=1 00:37:24.588 00:37:24.588 ' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:24.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.588 --rc genhtml_branch_coverage=1 00:37:24.588 --rc genhtml_function_coverage=1 00:37:24.588 --rc genhtml_legend=1 00:37:24.588 --rc geninfo_all_blocks=1 00:37:24.588 --rc geninfo_unexecuted_blocks=1 00:37:24.588 00:37:24.588 ' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:24.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.588 --rc genhtml_branch_coverage=1 00:37:24.588 --rc genhtml_function_coverage=1 00:37:24.588 --rc genhtml_legend=1 00:37:24.588 --rc geninfo_all_blocks=1 00:37:24.588 --rc geninfo_unexecuted_blocks=1 00:37:24.588 00:37:24.588 ' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:24.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:24.588 11:17:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:32.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:32.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:32.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:32.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:32.731 11:17:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:32.731 11:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:32.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:32.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:37:32.731 00:37:32.731 --- 10.0.0.2 ping statistics --- 00:37:32.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.731 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:37:32.731 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:32.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:32.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:37:32.732 00:37:32.732 --- 10.0.0.1 ping statistics --- 00:37:32.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.732 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:37:32.732 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:32.732 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:32.732 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:32.732 11:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:35.277 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.278 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.538 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=714929 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 714929 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 714929 ']' 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:35.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:35.799 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.059 [2024-11-15 11:17:55.365872] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:37:36.059 [2024-11-15 11:17:55.365936] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.059 [2024-11-15 11:17:55.465407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.059 [2024-11-15 11:17:55.520822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.059 [2024-11-15 11:17:55.520873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.059 [2024-11-15 11:17:55.520882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.059 [2024-11-15 11:17:55.520889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.059 [2024-11-15 11:17:55.520896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.059 [2024-11-15 11:17:55.523090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.059 [2024-11-15 11:17:55.523256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.059 [2024-11-15 11:17:55.523420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.059 [2024-11-15 11:17:55.523420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:37.001 
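
nvmfappstart above boils down to one pattern: launch nvmf_tgt inside the target namespace, remember its pid (714929 here), and block until the RPC socket answers. A reduced stand-in for that launch-and-wait sequence (the polling loop is a simplification of waitforlisten; rpc_get_methods is just a cheap RPC to probe with):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to serve requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
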
11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:37.001 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:37.002 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:37.002 ************************************ 00:37:37.002 START TEST spdk_target_abort 00:37:37.002 ************************************ 00:37:37.002 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:37.002 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:37.002 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:37.002 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.002 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.263 spdk_targetn1 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.263 [2024-11-15 11:17:56.580360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:37.263 [2024-11-15 11:17:56.628761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.263 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.264 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.525 [2024-11-15 11:17:56.943690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:40 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:56.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:56.959206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:472 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:56.959240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003e p:1 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:56.974210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:904 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:56.974241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0073 p:1 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:56.998113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1608 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:56.998147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:57.006772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1896 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:57.006811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:57.021078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2312 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:57.021108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:57.029125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2520 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:57.029152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:37.525 [2024-11-15 11:17:57.029244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2544 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:37.525 [2024-11-15 11:17:57.029255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:37.785 [2024-11-15 11:17:57.055253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3392 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:37.785 [2024-11-15 11:17:57.055285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00aa p:0 m:0 dnr:0 00:37:37.785 [2024-11-15 11:17:57.069246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3744 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:37.786 [2024-11-15 11:17:57.069275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d5 p:0 m:0 dnr:0 00:37:37.786 [2024-11-15 11:17:57.077223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3960 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:37.786 [2024-11-15 11:17:57.077252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:37:41.089 Initializing NVMe Controllers 00:37:41.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:41.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:41.089 Initialization complete. Launching workers. 00:37:41.089 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11962, failed: 11 00:37:41.089 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3315, failed to submit 8658 00:37:41.089 success 745, unsuccessful 2570, failed 0 00:37:41.089 11:18:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:41.089 11:18:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:41.089 [2024-11-15 11:18:00.217922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:41.089 [2024-11-15 11:18:00.217963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:41.089 [2024-11-15 11:18:00.240763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:824 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:41.089 [2024-11-15 11:18:00.240786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0076 p:1 m:0 dnr:0 00:37:41.089 [2024-11-15 11:18:00.264774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:1432 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:41.089 [2024-11-15 11:18:00.264797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00bb p:1 m:0 dnr:0 00:37:41.089 [2024-11-15 11:18:00.349313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:3504 len:8 PRP1 0x200004e52000 PRP2 0x0 00:37:41.089 [2024-11-15 11:18:00.349342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:37:41.089 [2024-11-15 11:18:00.364154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:3856 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:41.089 [2024-11-15 11:18:00.364175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:37:44.382 Initializing NVMe Controllers 00:37:44.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:44.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:44.382 Initialization complete. Launching workers. 
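
The q=4 summary just above is internally consistent, and the same identities hold for every run in this test: I/O completed plus I/O failed (11962 + 11) equals aborts submitted plus aborts that failed to submit (3315 + 8658), consistent with the abort tool pairing one abort attempt with every I/O it issues, while success plus unsuccessful (745 + 2570) accounts for exactly the 3315 aborts that reached the target. The q=24 and q=64 summaries that follow obey the same arithmetic. A quick check against the numbers above:

    # sanity-check the q=4 abort summary printed above
    io_completed=11962 io_failed=11
    abort_submitted=3315 abort_failed_submit=8658
    success=745 unsuccessful=2570
    (( io_completed + io_failed == abort_submitted + abort_failed_submit )) \
        && echo 'every I/O got exactly one abort attempt'
    (( success + unsuccessful == abort_submitted )) \
        && echo 'all submitted aborts accounted for'
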
00:37:44.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8539, failed: 5 00:37:44.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1199, failed to submit 7345 00:37:44.382 success 359, unsuccessful 840, failed 0 00:37:44.382 11:18:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:44.382 11:18:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:45.762 [2024-11-15 11:18:05.143451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:165 nsid:1 lba:194400 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:45.762 [2024-11-15 11:18:05.143481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:165 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:37:47.169 Initializing NVMe Controllers 00:37:47.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.169 Initialization complete. Launching workers. 00:37:47.169 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43941, failed: 1 00:37:47.169 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2736, failed to submit 41206 00:37:47.169 success 593, unsuccessful 2143, failed 0 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.169 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 714929 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 714929 ']' 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 714929 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 714929 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:49.077 11:18:08 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 714929' 00:37:49.077 killing process with pid 714929 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 714929 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 714929 00:37:49.077 00:37:49.077 real 0m12.269s 00:37:49.077 user 0m49.862s 00:37:49.077 sys 0m2.050s 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:49.077 11:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.077 ************************************ 00:37:49.077 END TEST spdk_target_abort 00:37:49.077 ************************************ 00:37:49.077 11:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:49.077 11:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:49.077 11:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:49.077 11:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.338 ************************************ 00:37:49.338 START TEST kernel_target_abort 00:37:49.338 ************************************ 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:49.338 11:18:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:49.338 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:52.641 Waiting for block devices as requested 00:37:52.641 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:52.641 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:52.902 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:52.902 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:52.902 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:53.162 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:53.162 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:53.162 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:53.162 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:53.422 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:53.422 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:53.682 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:53.682 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:53.682 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:53.943 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:53.943 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:53.943 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:54.517 No valid GPT data, bailing 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:37:54.517
00:37:54.517 Discovery Log Number of Records 2, Generation counter 2
00:37:54.517 =====Discovery Log Entry 0======
00:37:54.517 trtype: tcp
00:37:54.517 adrfam: ipv4
00:37:54.517 subtype: current discovery subsystem
00:37:54.517 treq: not specified, sq flow control disable supported
00:37:54.517 portid: 1
00:37:54.517 trsvcid: 4420
00:37:54.517 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:37:54.517 traddr: 10.0.0.1
00:37:54.517 eflags: none
00:37:54.517 sectype: none
00:37:54.517 =====Discovery Log Entry 1======
00:37:54.517 trtype: tcp
00:37:54.517 adrfam: ipv4
00:37:54.517 subtype: nvme subsystem
00:37:54.517 treq: not specified, sq flow control disable supported
00:37:54.517 portid: 1
00:37:54.517 trsvcid: 4420
00:37:54.517 subnqn: nqn.2016-06.io.spdk:testnqn
00:37:54.517 traddr: 10.0.0.1
00:37:54.517 eflags: none
00:37:54.517 sectype: none
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
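
The mkdir/echo/ln -s run above is the entire kernel-target story: unlike the SPDK target, nvmet is provisioned purely through configfs, and the nvme discover output confirms the port answers before any I/O is attempted. The trace elides the redirection targets of those echos; the sketch below fills them in with the standard nvmet configfs attribute names, which is what they most plausibly are (verify against configure_kernel_target in nvmf/common.sh before relying on it):

    modprobe -a nvmet nvmet-tcp
    SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1
    mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUB/attr_model"    # assumed target of the first echo
    echo 1 > "$SUB/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
    echo 1 > "$SUB/namespaces/1/enable"
    echo 10.0.0.1 > "$PORT/addr_traddr"
    echo tcp > "$PORT/addr_trtype"
    echo 4420 > "$PORT/addr_trsvcid"
    echo ipv4 > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"    # expose the subsystem on the port
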
11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:54.517 11:18:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:57.842 Initializing NVMe Controllers 00:37:57.842 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:57.842 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:57.842 Initialization complete. Launching workers. 00:37:57.842 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67988, failed: 0 00:37:57.842 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67988, failed to submit 0 00:37:57.842 success 0, unsuccessful 67988, failed 0 00:37:57.842 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:57.842 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:01.145 Initializing NVMe Controllers 00:38:01.145 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:01.145 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:01.145 Initialization complete. Launching workers. 
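
rabort rebuilds its target description incrementally, one key:value pair per pass of the loop traced above; the finished string is the standard SPDK transport-ID format that every example tool's -r flag accepts. The same construction in one helper (build_trid is a name invented for this sketch):

    # build the transport-ID string expected by -r (build_trid is hypothetical)
    build_trid() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
        echo "trtype:$trtype adrfam:$adrfam traddr:$traddr trsvcid:$trsvcid subnqn:$subnqn"
    }
    trid=$(build_trid tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn)
    ./build/examples/abort -q 4 -w rw -M 50 -o 4096 -r "$trid"
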
00:38:01.145 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 115217, failed: 0 00:38:01.145 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28990, failed to submit 86227 00:38:01.145 success 0, unsuccessful 28990, failed 0 00:38:01.145 11:18:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:01.145 11:18:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:04.446 Initializing NVMe Controllers 00:38:04.446 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:04.446 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:04.446 Initialization complete. Launching workers. 00:38:04.446 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145589, failed: 0 00:38:04.446 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36438, failed to submit 109151 00:38:04.446 success 0, unsuccessful 36438, failed 0 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:04.446 11:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:07.756 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:07.756 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:07.756 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:09.155 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:09.728 00:38:09.728 real 0m20.358s 00:38:09.728 user 0m10.020s 00:38:09.728 sys 0m5.996s 00:38:09.728 11:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:09.728 11:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.728 ************************************ 00:38:09.728 END TEST kernel_target_abort 00:38:09.728 ************************************ 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:09.728 rmmod nvme_tcp 00:38:09.728 rmmod nvme_fabrics 00:38:09.728 rmmod nvme_keyring 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 714929 ']' 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 714929 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 714929 ']' 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 714929 00:38:09.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (714929) - No such process 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 714929 is not found' 00:38:09.728 Process with pid 714929 is not found 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:09.728 11:18:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:13.033 Waiting for block devices as requested 00:38:13.033 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:13.033 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:13.295 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:13.295 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:13.295 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:13.557 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:13.557 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:13.557 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:13.819 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:13.819 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:14.081 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:14.081 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:14.081 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:14.342 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:14.342 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:14.342 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:14.603 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:14.864 11:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.779 11:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:16.779 00:38:16.779 real 0m52.675s 00:38:16.779 user 1m5.235s 00:38:16.779 sys 0m19.332s 00:38:16.779 11:18:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:16.779 11:18:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.779 ************************************ 00:38:16.779 END TEST nvmf_abort_qd_sizes 00:38:16.779 ************************************ 00:38:17.040 11:18:36 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:17.040 11:18:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:17.040 11:18:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:17.040 11:18:36 -- common/autotest_common.sh@10 -- # set +x 00:38:17.040 ************************************ 00:38:17.040 START TEST keyring_file 00:38:17.040 ************************************ 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:17.040 * Looking for test storage... 
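
Teardown above closes the loop on the setup conventions: iptr drops every rule tagged SPDK_NVMF by filtering the saved ruleset rather than replaying deletions, and remove_spdk_ns retires the target namespace. A reduced equivalent of that cleanup (remove_spdk_ns does more bookkeeping than the single command sketched here):

    # strip every iptables rule tagged at setup time, then drop the namespace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # reduced stand-in for remove_spdk_ns
    ip -4 addr flush cvl_0_1
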
00:38:17.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.040 11:18:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:17.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.040 --rc genhtml_branch_coverage=1 00:38:17.040 --rc genhtml_function_coverage=1 00:38:17.040 --rc genhtml_legend=1 00:38:17.040 --rc geninfo_all_blocks=1 00:38:17.040 --rc geninfo_unexecuted_blocks=1 00:38:17.040 00:38:17.040 ' 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:17.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.040 --rc genhtml_branch_coverage=1 00:38:17.040 --rc genhtml_function_coverage=1 00:38:17.040 --rc genhtml_legend=1 00:38:17.040 --rc geninfo_all_blocks=1 
00:38:17.040 --rc geninfo_unexecuted_blocks=1 00:38:17.040 00:38:17.040 ' 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:17.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.040 --rc genhtml_branch_coverage=1 00:38:17.040 --rc genhtml_function_coverage=1 00:38:17.040 --rc genhtml_legend=1 00:38:17.040 --rc geninfo_all_blocks=1 00:38:17.040 --rc geninfo_unexecuted_blocks=1 00:38:17.040 00:38:17.040 ' 00:38:17.040 11:18:36 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:17.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.040 --rc genhtml_branch_coverage=1 00:38:17.040 --rc genhtml_function_coverage=1 00:38:17.040 --rc genhtml_legend=1 00:38:17.040 --rc geninfo_all_blocks=1 00:38:17.040 --rc geninfo_unexecuted_blocks=1 00:38:17.040 00:38:17.040 ' 00:38:17.040 11:18:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:17.040 11:18:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:17.040 11:18:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.302 11:18:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:17.302 11:18:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:17.302 11:18:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.302 11:18:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.302 11:18:36 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.302 11:18:36 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.302 11:18:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.302 11:18:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:17.302 11:18:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:17.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
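
prep_key, entered here and finished in the next chunk, converts a raw hex key into the NVMe/TCP PSK interchange form and parks it in a mode-0600 tempfile for the keyring tests. A sketch of the formatting step; the payload layout (base64 over the raw key with its CRC32 appended) and the '00' digest field are assumptions to verify against format_key in nvmf/common.sh:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)
    # assumed layout: NVMeTLSkey-1:<digest>:<base64(key || crc32(key))>:
    python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + zlib.crc32(raw).to_bytes(4, "little")).decode())' "$key" > "$path"
    chmod 0600 "$path"
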
00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2fCVUif9LT 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2fCVUif9LT 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2fCVUif9LT 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2fCVUif9LT 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pw74yhRiAp 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:17.302 11:18:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pw74yhRiAp 00:38:17.302 11:18:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pw74yhRiAp 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pw74yhRiAp 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=725965 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 725965 00:38:17.302 11:18:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 725965 ']' 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:17.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:17.302 11:18:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.302 [2024-11-15 11:18:36.772554] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:38:17.302 [2024-11-15 11:18:36.772636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725965 ] 00:38:17.563 [2024-11-15 11:18:36.864973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.563 [2024-11-15 11:18:36.917254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:18.135 11:18:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:18.135 [2024-11-15 11:18:37.591166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:18.135 null0 00:38:18.135 [2024-11-15 11:18:37.623210] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:18.135 [2024-11-15 11:18:37.623579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.135 11:18:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.135 11:18:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:18.135 [2024-11-15 11:18:37.655269] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:18.135 request: 00:38:18.135 { 00:38:18.135 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:18.135 "secure_channel": false, 00:38:18.135 "listen_address": { 00:38:18.135 "trtype": "tcp", 00:38:18.135 "traddr": "127.0.0.1", 00:38:18.135 "trsvcid": "4420" 00:38:18.135 }, 00:38:18.135 "method": "nvmf_subsystem_add_listener", 00:38:18.135 "req_id": 1 00:38:18.135 } 00:38:18.396 Got JSON-RPC error response 00:38:18.396 response: 00:38:18.396 { 00:38:18.396 "code": 
-32602, 00:38:18.396 "message": "Invalid parameters" 00:38:18.396 } 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:18.396 11:18:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=725985 00:38:18.396 11:18:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 725985 /var/tmp/bperf.sock 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 725985 ']' 00:38:18.396 11:18:37 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:18.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:18.396 11:18:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:18.396 [2024-11-15 11:18:37.718745] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:38:18.396 [2024-11-15 11:18:37.718808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725985 ] 00:38:18.396 [2024-11-15 11:18:37.811974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.396 [2024-11-15 11:18:37.864942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.341 11:18:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:19.341 11:18:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:19.341 11:18:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:19.341 11:18:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:19.341 11:18:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pw74yhRiAp 00:38:19.341 11:18:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pw74yhRiAp 00:38:19.601 11:18:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:19.601 11:18:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:19.601 11:18:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.601 11:18:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.601 11:18:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.601 
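The prep_key/format_interchange_psk traces above reduce to a handful of shell steps: create a temp file, serialize the raw hex key into the NVMe/TCP PSK interchange format, lock the file down to 0600, and register it over the bperf RPC socket. A minimal stand-alone sketch, assuming the usual interchange layout (base64 of the key bytes with a little-endian CRC32 appended, behind an "NVMeTLSkey-1:<hash>:" prefix, hash 00 meaning no HMAC); the helper shape and names are illustrative, not lifted from keyring/common.sh:

    # hypothetical equivalent of: prep_key key0 00112233445566778899aabbccddeeff 0
    key_hex=00112233445566778899aabbccddeeff
    path=$(mktemp)
    python3 - "$key_hex" > "$path" <<'EOF'
    import base64, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 is appended before encoding
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF
    chmod 0600 "$path"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

The 0600 mode is load-bearing: later in this run the suite deliberately flips a key file to 0660 and keyring_file_add_key is rejected with "Invalid permissions for key file" until the mode is restored.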
11:18:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2fCVUif9LT == \/\t\m\p\/\t\m\p\.\2\f\C\V\U\i\f\9\L\T ]] 00:38:19.601 11:18:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:19.601 11:18:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:19.601 11:18:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.601 11:18:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.601 11:18:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.862 11:18:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.pw74yhRiAp == \/\t\m\p\/\t\m\p\.\p\w\7\4\y\h\R\i\A\p ]] 00:38:19.862 11:18:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:19.862 11:18:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:19.862 11:18:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.862 11:18:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.862 11:18:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.862 11:18:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.123 11:18:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:20.123 11:18:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:20.123 11:18:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.123 11:18:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:20.123 11:18:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.123 11:18:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:20.123 11:18:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.385 11:18:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:20.385 11:18:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:20.385 11:18:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:20.385 [2024-11-15 11:18:39.867383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:20.646 nvme0n1 00:38:20.646 11:18:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:20.646 11:18:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:20.646 11:18:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.646 11:18:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.646 11:18:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:20.646 11:18:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.646 11:18:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:20.646 11:18:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:20.646 11:18:40 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:38:20.646 11:18:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.646 11:18:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.646 11:18:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:20.646 11:18:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.907 11:18:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:20.907 11:18:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:21.166 Running I/O for 1 seconds... 00:38:22.105 23498.00 IOPS, 91.79 MiB/s 00:38:22.105 Latency(us) 00:38:22.105 [2024-11-15T10:18:41.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.105 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:22.105 nvme0n1 : 1.00 23548.89 91.99 0.00 0.00 5425.33 2102.61 16165.55 00:38:22.105 [2024-11-15T10:18:41.632Z] =================================================================================================================== 00:38:22.105 [2024-11-15T10:18:41.632Z] Total : 23548.89 91.99 0.00 0.00 5425.33 2102.61 16165.55 00:38:22.105 { 00:38:22.105 "results": [ 00:38:22.105 { 00:38:22.105 "job": "nvme0n1", 00:38:22.105 "core_mask": "0x2", 00:38:22.105 "workload": "randrw", 00:38:22.105 "percentage": 50, 00:38:22.105 "status": "finished", 00:38:22.105 "queue_depth": 128, 00:38:22.105 "io_size": 4096, 00:38:22.105 "runtime": 1.003402, 00:38:22.105 "iops": 23548.88668748916, 00:38:22.105 "mibps": 91.98783862300454, 00:38:22.105 "io_failed": 0, 00:38:22.105 "io_timeout": 0, 00:38:22.105 "avg_latency_us": 5425.333349133128, 00:38:22.105 "min_latency_us": 2102.6133333333332, 00:38:22.105 "max_latency_us": 16165.546666666667 00:38:22.105 } 00:38:22.105 ], 00:38:22.105 "core_count": 1 00:38:22.105 } 00:38:22.105 11:18:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:22.105 11:18:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:22.366 11:18:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.366 11:18:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:22.366 11:18:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.366 11:18:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.627 11:18:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:22.627 11:18:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:22.627 11:18:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.627 11:18:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.888 [2024-11-15 11:18:42.155665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:22.888 [2024-11-15 11:18:42.155932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87740 (107): Transport endpoint is not connected 00:38:22.888 [2024-11-15 11:18:42.156927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87740 (9): Bad file descriptor 00:38:22.888 [2024-11-15 11:18:42.157929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:22.888 [2024-11-15 11:18:42.157936] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:22.888 [2024-11-15 11:18:42.157941] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:22.888 [2024-11-15 11:18:42.157947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
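The teardown errors above, together with the -5 (Input/output error) JSON-RPC response that follows, are the point of this step: the target only serves the subsystem under the PSK registered as key0, so an attach driven by key1 fails the TLS handshake and the socket is dropped (errno 107, then the stale descriptor). The NOT wrapper in the trace simply asserts a non-zero exit; a hedged sketch of the same expected-failure pattern:

    # the wrong-key attach must fail (illustrative; the suite's NOT() helper differs in detail)
    if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
           -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi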
00:38:22.888 request: 00:38:22.888 { 00:38:22.888 "name": "nvme0", 00:38:22.888 "trtype": "tcp", 00:38:22.888 "traddr": "127.0.0.1", 00:38:22.888 "adrfam": "ipv4", 00:38:22.888 "trsvcid": "4420", 00:38:22.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:22.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:22.888 "prchk_reftag": false, 00:38:22.888 "prchk_guard": false, 00:38:22.888 "hdgst": false, 00:38:22.888 "ddgst": false, 00:38:22.888 "psk": "key1", 00:38:22.888 "allow_unrecognized_csi": false, 00:38:22.888 "method": "bdev_nvme_attach_controller", 00:38:22.888 "req_id": 1 00:38:22.888 } 00:38:22.888 Got JSON-RPC error response 00:38:22.888 response: 00:38:22.888 { 00:38:22.888 "code": -5, 00:38:22.888 "message": "Input/output error" 00:38:22.888 } 00:38:22.888 11:18:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:22.888 11:18:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:22.888 11:18:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:22.888 11:18:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:22.888 11:18:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.888 11:18:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:22.888 11:18:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.888 11:18:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.889 11:18:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.889 11:18:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.148 11:18:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:23.148 11:18:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:23.148 11:18:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:23.409 11:18:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:23.409 11:18:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:23.409 11:18:42 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:23.409 11:18:42 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:23.409 11:18:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.669 11:18:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:23.669 11:18:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2fCVUif9LT 00:38:23.669 11:18:43 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.669 11:18:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.669 11:18:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.929 [2024-11-15 11:18:43.264788] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2fCVUif9LT': 0100660 00:38:23.929 [2024-11-15 11:18:43.264808] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:23.929 request: 00:38:23.929 { 00:38:23.929 "name": "key0", 00:38:23.929 "path": "/tmp/tmp.2fCVUif9LT", 00:38:23.929 "method": "keyring_file_add_key", 00:38:23.929 "req_id": 1 00:38:23.929 } 00:38:23.929 Got JSON-RPC error response 00:38:23.929 response: 00:38:23.929 { 00:38:23.929 "code": -1, 00:38:23.929 "message": "Operation not permitted" 00:38:23.929 } 00:38:23.929 11:18:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:23.929 11:18:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:23.929 11:18:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:23.929 11:18:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:23.929 11:18:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2fCVUif9LT 00:38:23.929 11:18:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.929 11:18:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2fCVUif9LT 00:38:23.929 11:18:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2fCVUif9LT 00:38:24.189 11:18:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.189 11:18:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:24.189 11:18:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.189 11:18:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.189 11:18:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.450 [2024-11-15 11:18:43.790121] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2fCVUif9LT': No such file or directory 00:38:24.450 [2024-11-15 11:18:43.790136] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:24.450 [2024-11-15 11:18:43.790148] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:24.450 [2024-11-15 11:18:43.790154] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:24.450 [2024-11-15 11:18:43.790159] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:24.450 [2024-11-15 11:18:43.790165] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:24.450 request: 00:38:24.450 { 00:38:24.450 "name": "nvme0", 00:38:24.450 "trtype": "tcp", 00:38:24.450 "traddr": "127.0.0.1", 00:38:24.450 "adrfam": "ipv4", 00:38:24.450 "trsvcid": "4420", 00:38:24.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.450 "prchk_reftag": false, 00:38:24.450 "prchk_guard": false, 00:38:24.450 "hdgst": false, 00:38:24.450 "ddgst": false, 00:38:24.450 "psk": "key0", 00:38:24.450 "allow_unrecognized_csi": false, 00:38:24.450 "method": "bdev_nvme_attach_controller", 00:38:24.450 "req_id": 1 00:38:24.450 } 00:38:24.450 Got JSON-RPC error response 00:38:24.450 response: 00:38:24.450 { 00:38:24.450 "code": -19, 00:38:24.450 "message": "No such device" 00:38:24.450 } 00:38:24.450 11:18:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:24.450 11:18:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:24.450 11:18:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:24.450 11:18:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:24.450 11:18:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:24.450 11:18:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:24.450 11:18:43 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:24.450 11:18:43 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:24.450 11:18:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:24.450 11:18:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:24.450 11:18:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:24.712 11:18:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:24.712 11:18:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:24.712 11:18:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:24.712 11:18:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sCe7AItOm2 00:38:24.712 11:18:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.712 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.973 nvme0n1 00:38:24.973 11:18:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:24.973 11:18:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.973 11:18:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.973 11:18:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.973 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.973 11:18:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.233 11:18:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:25.233 11:18:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:25.233 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:25.492 11:18:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:25.492 11:18:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.492 11:18:44 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:25.492 11:18:44 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.492 11:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.752 11:18:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:25.752 11:18:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:25.752 11:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:26.012 11:18:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:26.012 11:18:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:26.012 11:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.012 11:18:45 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:26.012 11:18:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sCe7AItOm2 00:38:26.012 11:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sCe7AItOm2 00:38:26.271 11:18:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pw74yhRiAp 00:38:26.271 11:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pw74yhRiAp 00:38:26.531 11:18:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.531 11:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.531 nvme0n1 00:38:26.831 11:18:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:26.831 11:18:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:26.831 11:18:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:26.831 "subsystems": [ 00:38:26.831 { 00:38:26.831 "subsystem": "keyring", 00:38:26.831 "config": [ 00:38:26.831 { 00:38:26.831 "method": "keyring_file_add_key", 00:38:26.831 "params": { 00:38:26.831 "name": "key0", 00:38:26.831 "path": "/tmp/tmp.sCe7AItOm2" 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "keyring_file_add_key", 00:38:26.831 "params": { 00:38:26.831 "name": "key1", 00:38:26.831 "path": "/tmp/tmp.pw74yhRiAp" 00:38:26.831 } 00:38:26.831 } 00:38:26.831 ] 
00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "iobuf", 00:38:26.831 "config": [ 00:38:26.831 { 00:38:26.831 "method": "iobuf_set_options", 00:38:26.831 "params": { 00:38:26.831 "small_pool_count": 8192, 00:38:26.831 "large_pool_count": 1024, 00:38:26.831 "small_bufsize": 8192, 00:38:26.831 "large_bufsize": 135168, 00:38:26.831 "enable_numa": false 00:38:26.831 } 00:38:26.831 } 00:38:26.831 ] 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "sock", 00:38:26.831 "config": [ 00:38:26.831 { 00:38:26.831 "method": "sock_set_default_impl", 00:38:26.831 "params": { 00:38:26.831 "impl_name": "posix" 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "sock_impl_set_options", 00:38:26.831 "params": { 00:38:26.831 "impl_name": "ssl", 00:38:26.831 "recv_buf_size": 4096, 00:38:26.831 "send_buf_size": 4096, 00:38:26.831 "enable_recv_pipe": true, 00:38:26.831 "enable_quickack": false, 00:38:26.831 "enable_placement_id": 0, 00:38:26.831 "enable_zerocopy_send_server": true, 00:38:26.831 "enable_zerocopy_send_client": false, 00:38:26.831 "zerocopy_threshold": 0, 00:38:26.831 "tls_version": 0, 00:38:26.831 "enable_ktls": false 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "sock_impl_set_options", 00:38:26.831 "params": { 00:38:26.831 "impl_name": "posix", 00:38:26.831 "recv_buf_size": 2097152, 00:38:26.831 "send_buf_size": 2097152, 00:38:26.831 "enable_recv_pipe": true, 00:38:26.831 "enable_quickack": false, 00:38:26.831 "enable_placement_id": 0, 00:38:26.831 "enable_zerocopy_send_server": true, 00:38:26.831 "enable_zerocopy_send_client": false, 00:38:26.831 "zerocopy_threshold": 0, 00:38:26.831 "tls_version": 0, 00:38:26.831 "enable_ktls": false 00:38:26.831 } 00:38:26.831 } 00:38:26.831 ] 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "vmd", 00:38:26.831 "config": [] 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "accel", 00:38:26.831 "config": [ 00:38:26.831 { 00:38:26.831 "method": "accel_set_options", 00:38:26.831 "params": { 00:38:26.831 "small_cache_size": 128, 00:38:26.831 "large_cache_size": 16, 00:38:26.831 "task_count": 2048, 00:38:26.831 "sequence_count": 2048, 00:38:26.831 "buf_count": 2048 00:38:26.831 } 00:38:26.831 } 00:38:26.831 ] 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "bdev", 00:38:26.831 "config": [ 00:38:26.831 { 00:38:26.831 "method": "bdev_set_options", 00:38:26.831 "params": { 00:38:26.831 "bdev_io_pool_size": 65535, 00:38:26.831 "bdev_io_cache_size": 256, 00:38:26.831 "bdev_auto_examine": true, 00:38:26.831 "iobuf_small_cache_size": 128, 00:38:26.831 "iobuf_large_cache_size": 16 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_raid_set_options", 00:38:26.831 "params": { 00:38:26.831 "process_window_size_kb": 1024, 00:38:26.831 "process_max_bandwidth_mb_sec": 0 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_iscsi_set_options", 00:38:26.831 "params": { 00:38:26.831 "timeout_sec": 30 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_nvme_set_options", 00:38:26.831 "params": { 00:38:26.831 "action_on_timeout": "none", 00:38:26.831 "timeout_us": 0, 00:38:26.831 "timeout_admin_us": 0, 00:38:26.831 "keep_alive_timeout_ms": 10000, 00:38:26.831 "arbitration_burst": 0, 00:38:26.831 "low_priority_weight": 0, 00:38:26.831 "medium_priority_weight": 0, 00:38:26.831 "high_priority_weight": 0, 00:38:26.831 "nvme_adminq_poll_period_us": 10000, 00:38:26.831 "nvme_ioq_poll_period_us": 0, 00:38:26.831 "io_queue_requests": 512, 
00:38:26.831 "delay_cmd_submit": true, 00:38:26.831 "transport_retry_count": 4, 00:38:26.831 "bdev_retry_count": 3, 00:38:26.831 "transport_ack_timeout": 0, 00:38:26.831 "ctrlr_loss_timeout_sec": 0, 00:38:26.831 "reconnect_delay_sec": 0, 00:38:26.831 "fast_io_fail_timeout_sec": 0, 00:38:26.831 "disable_auto_failback": false, 00:38:26.831 "generate_uuids": false, 00:38:26.831 "transport_tos": 0, 00:38:26.831 "nvme_error_stat": false, 00:38:26.831 "rdma_srq_size": 0, 00:38:26.831 "io_path_stat": false, 00:38:26.831 "allow_accel_sequence": false, 00:38:26.831 "rdma_max_cq_size": 0, 00:38:26.831 "rdma_cm_event_timeout_ms": 0, 00:38:26.831 "dhchap_digests": [ 00:38:26.831 "sha256", 00:38:26.831 "sha384", 00:38:26.831 "sha512" 00:38:26.831 ], 00:38:26.831 "dhchap_dhgroups": [ 00:38:26.831 "null", 00:38:26.831 "ffdhe2048", 00:38:26.831 "ffdhe3072", 00:38:26.831 "ffdhe4096", 00:38:26.831 "ffdhe6144", 00:38:26.831 "ffdhe8192" 00:38:26.831 ] 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_nvme_attach_controller", 00:38:26.831 "params": { 00:38:26.831 "name": "nvme0", 00:38:26.831 "trtype": "TCP", 00:38:26.831 "adrfam": "IPv4", 00:38:26.831 "traddr": "127.0.0.1", 00:38:26.831 "trsvcid": "4420", 00:38:26.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.831 "prchk_reftag": false, 00:38:26.831 "prchk_guard": false, 00:38:26.831 "ctrlr_loss_timeout_sec": 0, 00:38:26.831 "reconnect_delay_sec": 0, 00:38:26.831 "fast_io_fail_timeout_sec": 0, 00:38:26.831 "psk": "key0", 00:38:26.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.831 "hdgst": false, 00:38:26.831 "ddgst": false, 00:38:26.831 "multipath": "multipath" 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_nvme_set_hotplug", 00:38:26.831 "params": { 00:38:26.831 "period_us": 100000, 00:38:26.831 "enable": false 00:38:26.831 } 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "method": "bdev_wait_for_examine" 00:38:26.831 } 00:38:26.831 ] 00:38:26.831 }, 00:38:26.831 { 00:38:26.831 "subsystem": "nbd", 00:38:26.831 "config": [] 00:38:26.831 } 00:38:26.831 ] 00:38:26.831 }' 00:38:26.831 11:18:46 keyring_file -- keyring/file.sh@115 -- # killprocess 725985 00:38:26.831 11:18:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 725985 ']' 00:38:26.831 11:18:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 725985 00:38:26.831 11:18:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:26.831 11:18:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:26.831 11:18:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 725985 00:38:27.118 11:18:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:27.118 11:18:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:27.118 11:18:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 725985' 00:38:27.119 killing process with pid 725985 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@971 -- # kill 725985 00:38:27.119 Received shutdown signal, test time was about 1.000000 seconds 00:38:27.119 00:38:27.119 Latency(us) 00:38:27.119 [2024-11-15T10:18:46.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.119 [2024-11-15T10:18:46.646Z] =================================================================================================================== 00:38:27.119 [2024-11-15T10:18:46.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.119 
11:18:46 keyring_file -- common/autotest_common.sh@976 -- # wait 725985 00:38:27.119 11:18:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=727802 00:38:27.119 11:18:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 727802 /var/tmp/bperf.sock 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 727802 ']' 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:27.119 11:18:46 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:27.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:27.119 11:18:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:27.119 11:18:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:27.119 "subsystems": [ 00:38:27.119 { 00:38:27.119 "subsystem": "keyring", 00:38:27.119 "config": [ 00:38:27.119 { 00:38:27.119 "method": "keyring_file_add_key", 00:38:27.119 "params": { 00:38:27.119 "name": "key0", 00:38:27.119 "path": "/tmp/tmp.sCe7AItOm2" 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "keyring_file_add_key", 00:38:27.119 "params": { 00:38:27.119 "name": "key1", 00:38:27.119 "path": "/tmp/tmp.pw74yhRiAp" 00:38:27.119 } 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "subsystem": "iobuf", 00:38:27.119 "config": [ 00:38:27.119 { 00:38:27.119 "method": "iobuf_set_options", 00:38:27.119 "params": { 00:38:27.119 "small_pool_count": 8192, 00:38:27.119 "large_pool_count": 1024, 00:38:27.119 "small_bufsize": 8192, 00:38:27.119 "large_bufsize": 135168, 00:38:27.119 "enable_numa": false 00:38:27.119 } 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "subsystem": "sock", 00:38:27.119 "config": [ 00:38:27.119 { 00:38:27.119 "method": "sock_set_default_impl", 00:38:27.119 "params": { 00:38:27.119 "impl_name": "posix" 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "sock_impl_set_options", 00:38:27.119 "params": { 00:38:27.119 "impl_name": "ssl", 00:38:27.119 "recv_buf_size": 4096, 00:38:27.119 "send_buf_size": 4096, 00:38:27.119 "enable_recv_pipe": true, 00:38:27.119 "enable_quickack": false, 00:38:27.119 "enable_placement_id": 0, 00:38:27.119 "enable_zerocopy_send_server": true, 00:38:27.119 "enable_zerocopy_send_client": false, 00:38:27.119 "zerocopy_threshold": 0, 00:38:27.119 "tls_version": 0, 00:38:27.119 "enable_ktls": false 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "sock_impl_set_options", 00:38:27.119 "params": { 00:38:27.119 "impl_name": "posix", 00:38:27.119 "recv_buf_size": 2097152, 00:38:27.119 "send_buf_size": 2097152, 00:38:27.119 "enable_recv_pipe": true, 00:38:27.119 "enable_quickack": false, 00:38:27.119 "enable_placement_id": 0, 00:38:27.119 "enable_zerocopy_send_server": true, 00:38:27.119 "enable_zerocopy_send_client": false, 00:38:27.119 "zerocopy_threshold": 0, 00:38:27.119 "tls_version": 0, 00:38:27.119 "enable_ktls": false 00:38:27.119 } 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }, 
00:38:27.119 { 00:38:27.119 "subsystem": "vmd", 00:38:27.119 "config": [] 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "subsystem": "accel", 00:38:27.119 "config": [ 00:38:27.119 { 00:38:27.119 "method": "accel_set_options", 00:38:27.119 "params": { 00:38:27.119 "small_cache_size": 128, 00:38:27.119 "large_cache_size": 16, 00:38:27.119 "task_count": 2048, 00:38:27.119 "sequence_count": 2048, 00:38:27.119 "buf_count": 2048 00:38:27.119 } 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "subsystem": "bdev", 00:38:27.119 "config": [ 00:38:27.119 { 00:38:27.119 "method": "bdev_set_options", 00:38:27.119 "params": { 00:38:27.119 "bdev_io_pool_size": 65535, 00:38:27.119 "bdev_io_cache_size": 256, 00:38:27.119 "bdev_auto_examine": true, 00:38:27.119 "iobuf_small_cache_size": 128, 00:38:27.119 "iobuf_large_cache_size": 16 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_raid_set_options", 00:38:27.119 "params": { 00:38:27.119 "process_window_size_kb": 1024, 00:38:27.119 "process_max_bandwidth_mb_sec": 0 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_iscsi_set_options", 00:38:27.119 "params": { 00:38:27.119 "timeout_sec": 30 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_nvme_set_options", 00:38:27.119 "params": { 00:38:27.119 "action_on_timeout": "none", 00:38:27.119 "timeout_us": 0, 00:38:27.119 "timeout_admin_us": 0, 00:38:27.119 "keep_alive_timeout_ms": 10000, 00:38:27.119 "arbitration_burst": 0, 00:38:27.119 "low_priority_weight": 0, 00:38:27.119 "medium_priority_weight": 0, 00:38:27.119 "high_priority_weight": 0, 00:38:27.119 "nvme_adminq_poll_period_us": 10000, 00:38:27.119 "nvme_ioq_poll_period_us": 0, 00:38:27.119 "io_queue_requests": 512, 00:38:27.119 "delay_cmd_submit": true, 00:38:27.119 "transport_retry_count": 4, 00:38:27.119 "bdev_retry_count": 3, 00:38:27.119 "transport_ack_timeout": 0, 00:38:27.119 "ctrlr_loss_timeout_sec": 0, 00:38:27.119 "reconnect_delay_sec": 0, 00:38:27.119 "fast_io_fail_timeout_sec": 0, 00:38:27.119 "disable_auto_failback": false, 00:38:27.119 "generate_uuids": false, 00:38:27.119 "transport_tos": 0, 00:38:27.119 "nvme_error_stat": false, 00:38:27.119 "rdma_srq_size": 0, 00:38:27.119 "io_path_stat": false, 00:38:27.119 "allow_accel_sequence": false, 00:38:27.119 "rdma_max_cq_size": 0, 00:38:27.119 "rdma_cm_event_timeout_ms": 0, 00:38:27.119 "dhchap_digests": [ 00:38:27.119 "sha256", 00:38:27.119 "sha384", 00:38:27.119 "sha512" 00:38:27.119 ], 00:38:27.119 "dhchap_dhgroups": [ 00:38:27.119 "null", 00:38:27.119 "ffdhe2048", 00:38:27.119 "ffdhe3072", 00:38:27.119 "ffdhe4096", 00:38:27.119 "ffdhe6144", 00:38:27.119 "ffdhe8192" 00:38:27.119 ] 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_nvme_attach_controller", 00:38:27.119 "params": { 00:38:27.119 "name": "nvme0", 00:38:27.119 "trtype": "TCP", 00:38:27.119 "adrfam": "IPv4", 00:38:27.119 "traddr": "127.0.0.1", 00:38:27.119 "trsvcid": "4420", 00:38:27.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.119 "prchk_reftag": false, 00:38:27.119 "prchk_guard": false, 00:38:27.119 "ctrlr_loss_timeout_sec": 0, 00:38:27.119 "reconnect_delay_sec": 0, 00:38:27.119 "fast_io_fail_timeout_sec": 0, 00:38:27.119 "psk": "key0", 00:38:27.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.119 "hdgst": false, 00:38:27.119 "ddgst": false, 00:38:27.119 "multipath": "multipath" 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_nvme_set_hotplug", 00:38:27.119 "params": { 
00:38:27.119 "period_us": 100000, 00:38:27.119 "enable": false 00:38:27.119 } 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "method": "bdev_wait_for_examine" 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }, 00:38:27.119 { 00:38:27.119 "subsystem": "nbd", 00:38:27.119 "config": [] 00:38:27.119 } 00:38:27.119 ] 00:38:27.119 }' 00:38:27.119 [2024-11-15 11:18:46.518324] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 00:38:27.119 [2024-11-15 11:18:46.518380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727802 ] 00:38:27.119 [2024-11-15 11:18:46.601361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.119 [2024-11-15 11:18:46.629827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.392 [2024-11-15 11:18:46.773920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:28.022 11:18:47 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:28.022 11:18:47 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:28.022 11:18:47 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.022 11:18:47 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:28.022 11:18:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:28.022 11:18:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.022 11:18:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.282 11:18:47 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:28.282 11:18:47 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:28.282 11:18:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.282 11:18:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:28.282 11:18:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.282 11:18:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:28.282 11:18:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.540 11:18:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:28.540 11:18:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:28.540 11:18:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:28.540 11:18:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:28.540 11:18:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:28.540 11:18:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:28.540 11:18:48 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.sCe7AItOm2 /tmp/tmp.pw74yhRiAp 00:38:28.540 11:18:48 keyring_file -- keyring/file.sh@20 -- # killprocess 727802 00:38:28.540 11:18:48 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 727802 ']' 00:38:28.540 11:18:48 keyring_file -- common/autotest_common.sh@956 -- # kill -0 727802 00:38:28.540 11:18:48 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:28.540 11:18:48 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:28.540 11:18:48 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 727802 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 727802' 00:38:28.799 killing process with pid 727802 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@971 -- # kill 727802 00:38:28.799 Received shutdown signal, test time was about 1.000000 seconds 00:38:28.799 00:38:28.799 Latency(us) 00:38:28.799 [2024-11-15T10:18:48.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.799 [2024-11-15T10:18:48.326Z] =================================================================================================================== 00:38:28.799 [2024-11-15T10:18:48.326Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@976 -- # wait 727802 00:38:28.799 11:18:48 keyring_file -- keyring/file.sh@21 -- # killprocess 725965 00:38:28.799 11:18:48 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 725965 ']' 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@956 -- # kill -0 725965 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 725965 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 725965' 00:38:28.800 killing process with pid 725965 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@971 -- # kill 725965 00:38:28.800 11:18:48 keyring_file -- common/autotest_common.sh@976 -- # wait 725965 00:38:29.059 00:38:29.059 real 0m12.084s 00:38:29.059 user 0m29.071s 00:38:29.059 sys 0m2.777s 00:38:29.059 11:18:48 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:29.059 11:18:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.059 ************************************ 00:38:29.059 END TEST keyring_file 00:38:29.059 ************************************ 00:38:29.059 11:18:48 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:29.059 11:18:48 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:29.059 11:18:48 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:29.059 11:18:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:29.059 11:18:48 -- 
common/autotest_common.sh@10 -- # set +x 00:38:29.059 ************************************ 00:38:29.059 START TEST keyring_linux 00:38:29.059 ************************************ 00:38:29.059 11:18:48 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:29.059 Joined session keyring: 619195211 00:38:29.319 * Looking for test storage... 00:38:29.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:29.319 11:18:48 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:29.319 11:18:48 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:29.319 11:18:48 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:29.319 11:18:48 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.319 11:18:48 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:29.319 11:18:48 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.320 11:18:48 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:29.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.320 --rc genhtml_branch_coverage=1 00:38:29.320 --rc genhtml_function_coverage=1 00:38:29.320 --rc genhtml_legend=1 00:38:29.320 --rc geninfo_all_blocks=1 00:38:29.320 --rc geninfo_unexecuted_blocks=1 00:38:29.320 00:38:29.320 ' 00:38:29.320 11:18:48 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:29.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.320 --rc genhtml_branch_coverage=1 00:38:29.320 --rc genhtml_function_coverage=1 00:38:29.320 --rc genhtml_legend=1 00:38:29.320 --rc geninfo_all_blocks=1 00:38:29.320 --rc geninfo_unexecuted_blocks=1 00:38:29.320 00:38:29.320 ' 00:38:29.320 11:18:48 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:29.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.320 --rc genhtml_branch_coverage=1 00:38:29.320 --rc genhtml_function_coverage=1 00:38:29.320 --rc genhtml_legend=1 00:38:29.320 --rc geninfo_all_blocks=1 00:38:29.320 --rc geninfo_unexecuted_blocks=1 00:38:29.320 00:38:29.320 ' 00:38:29.320 11:18:48 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:29.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.320 --rc genhtml_branch_coverage=1 00:38:29.320 --rc genhtml_function_coverage=1 00:38:29.320 --rc genhtml_legend=1 00:38:29.320 --rc geninfo_all_blocks=1 00:38:29.320 --rc geninfo_unexecuted_blocks=1 00:38:29.320 00:38:29.320 ' 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.320 11:18:48 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.320 11:18:48 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.320 11:18:48 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.320 11:18:48 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.320 11:18:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.320 11:18:48 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.320 11:18:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.320 11:18:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:29.320 11:18:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:29.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:29.320 11:18:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:29.320 /tmp/:spdk-test:key0 00:38:29.320 11:18:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:29.320 11:18:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:29.321 11:18:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:29.321 11:18:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:29.321 
11:18:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:29.321 11:18:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:29.321 11:18:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:29.580 11:18:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:29.580 /tmp/:spdk-test:key1 00:38:29.580 11:18:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=728256 00:38:29.580 11:18:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 728256 00:38:29.580 11:18:48 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 728256 ']' 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:29.580 11:18:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:29.580 [2024-11-15 11:18:48.905862] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
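For readers following the trace: the prep_key steps above format each raw key into the NVMe TLS PSK interchange form (prefix NVMeTLSkey-1, digest field 00, then the key bytes with a CRC32 checksum appended, all base64-encoded) and write it mode 0600. A minimal sketch of that transformation, mirroring the helper's own "python -" pattern — the CRC32 variant and byte order are assumptions here, not read from the helper's source:

  # Hypothetical re-creation of the prep_key flow seen above; the path and key
  # value match this run, the CRC handling is an assumption.
  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  python3 - "$key" <<'EOF' > "$path"
  import sys, zlib, base64
  k = sys.argv[1].encode()                    # key bytes as configured
  crc = zlib.crc32(k).to_bytes(4, "little")   # assumed CRC32, little-endian
  print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":")
  EOF
  chmod 0600 "$path"   # as done by keyring/common.sh@21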
00:38:29.580 [2024-11-15 11:18:48.905930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728256 ] 00:38:29.580 [2024-11-15 11:18:48.992316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.580 [2024-11-15 11:18:49.032598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.520 11:18:49 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:30.520 11:18:49 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:30.520 11:18:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:30.520 11:18:49 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.520 11:18:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:30.521 [2024-11-15 11:18:49.715863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.521 null0 00:38:30.521 [2024-11-15 11:18:49.747917] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:30.521 [2024-11-15 11:18:49.748280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.521 11:18:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:30.521 304891016 00:38:30.521 11:18:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:30.521 1021222514 00:38:30.521 11:18:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=728581 00:38:30.521 11:18:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 728581 /var/tmp/bperf.sock 00:38:30.521 11:18:49 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 728581 ']' 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:30.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:30.521 11:18:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:30.521 [2024-11-15 11:18:49.827331] Starting SPDK v25.01-pre git sha1 8c4dec1aa / DPDK 24.03.0 initialization... 
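The serials printed above (304891016 and 1021222514) come from loading the formatted PSKs into the kernel session keyring. Condensed from the trace, the keyutils side of the test reduces to the following, with key names and files as in this run:

  # keyctl add prints the serial of the newly created key.
  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  keyctl search @s user :spdk-test:key0   # resolves the name back to the serial (linux.sh@16)
  keyctl print "$sn"                      # dumps the payload, compared in linux.sh@27
  keyctl unlink "$sn"                     # cleanup path taken in linux.sh@34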
00:38:30.521 [2024-11-15 11:18:49.827378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728581 ] 00:38:30.521 [2024-11-15 11:18:49.909244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.521 [2024-11-15 11:18:49.938983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.463 11:18:50 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:31.463 11:18:50 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:31.463 11:18:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:31.463 11:18:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:31.463 11:18:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:31.463 11:18:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:31.724 11:18:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:31.724 11:18:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:31.724 [2024-11-15 11:18:51.172389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:31.724 nvme0n1 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:31.985 11:18:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:31.985 11:18:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:31.985 11:18:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.985 11:18:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:31.985 11:18:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@25 -- # sn=304891016 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:32.246 11:18:51 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 304891016 == \3\0\4\8\9\1\0\1\6 ]] 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 304891016 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:32.246 11:18:51 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:32.246 Running I/O for 1 seconds... 00:38:33.633 24417.00 IOPS, 95.38 MiB/s 00:38:33.633 Latency(us) 00:38:33.633 [2024-11-15T10:18:53.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.633 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:33.633 nvme0n1 : 1.01 24416.64 95.38 0.00 0.00 5226.39 1747.63 6471.68 00:38:33.633 [2024-11-15T10:18:53.160Z] =================================================================================================================== 00:38:33.633 [2024-11-15T10:18:53.160Z] Total : 24416.64 95.38 0.00 0.00 5226.39 1747.63 6471.68 00:38:33.633 { 00:38:33.633 "results": [ 00:38:33.633 { 00:38:33.633 "job": "nvme0n1", 00:38:33.633 "core_mask": "0x2", 00:38:33.633 "workload": "randread", 00:38:33.633 "status": "finished", 00:38:33.633 "queue_depth": 128, 00:38:33.633 "io_size": 4096, 00:38:33.633 "runtime": 1.005257, 00:38:33.633 "iops": 24416.641714506837, 00:38:33.633 "mibps": 95.37750669729233, 00:38:33.633 "io_failed": 0, 00:38:33.633 "io_timeout": 0, 00:38:33.633 "avg_latency_us": 5226.389198614789, 00:38:33.633 "min_latency_us": 1747.6266666666668, 00:38:33.633 "max_latency_us": 6471.68 00:38:33.633 } 00:38:33.633 ], 00:38:33.633 "core_count": 1 00:38:33.633 } 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:33.633 11:18:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:33.633 11:18:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:33.633 11:18:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.633 11:18:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:33.633 11:18:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:33.633 11:18:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:33.633 11:18:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
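Stripped of the xtrace noise, the bperf side of the positive test above is a short RPC sequence against the bdevperf socket; every command below appears verbatim in the trace, with $rootdir standing in for the checked-out spdk tree:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as in this run
  rpc="$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_linux_set_options --enable        # linux.sh@73: enable kernel keyring lookups
  $rpc framework_start_init                      # linux.sh@74: finish the --wait-for-rpc startup
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0                      # TLS attach using the session key
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests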
00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.633 11:18:53 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:33.633 11:18:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:33.894 [2024-11-15 11:18:53.307335] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:33.894 [2024-11-15 11:18:53.308135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be9c0 (107): Transport endpoint is not connected 00:38:33.894 [2024-11-15 11:18:53.309131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be9c0 (9): Bad file descriptor 00:38:33.894 [2024-11-15 11:18:53.310133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:33.894 [2024-11-15 11:18:53.310139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:33.894 [2024-11-15 11:18:53.310145] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:33.894 [2024-11-15 11:18:53.310151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:33.894 request: 00:38:33.894 { 00:38:33.894 "name": "nvme0", 00:38:33.894 "trtype": "tcp", 00:38:33.894 "traddr": "127.0.0.1", 00:38:33.894 "adrfam": "ipv4", 00:38:33.894 "trsvcid": "4420", 00:38:33.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.894 "prchk_reftag": false, 00:38:33.894 "prchk_guard": false, 00:38:33.894 "hdgst": false, 00:38:33.894 "ddgst": false, 00:38:33.894 "psk": ":spdk-test:key1", 00:38:33.894 "allow_unrecognized_csi": false, 00:38:33.894 "method": "bdev_nvme_attach_controller", 00:38:33.894 "req_id": 1 00:38:33.894 } 00:38:33.894 Got JSON-RPC error response 00:38:33.894 response: 00:38:33.894 { 00:38:33.894 "code": -5, 00:38:33.894 "message": "Input/output error" 00:38:33.894 } 00:38:33.894 11:18:53 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:33.894 11:18:53 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:33.894 11:18:53 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:33.894 11:18:53 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:33.894 11:18:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:33.894 11:18:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:33.894 11:18:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:33.894 11:18:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:33.894 11:18:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@33 -- # sn=304891016 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 304891016 00:38:33.895 1 links removed 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@33 -- # sn=1021222514 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1021222514 00:38:33.895 1 links removed 00:38:33.895 11:18:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 728581 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 728581 ']' 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 728581 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 728581 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 728581' 00:38:33.895 killing process with pid 728581 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@971 -- # kill 728581 00:38:33.895 Received shutdown signal, test time was about 1.000000 seconds 00:38:33.895 00:38:33.895 
Latency(us) 00:38:33.895 [2024-11-15T10:18:53.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.895 [2024-11-15T10:18:53.422Z] =================================================================================================================== 00:38:33.895 [2024-11-15T10:18:53.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.895 11:18:53 keyring_linux -- common/autotest_common.sh@976 -- # wait 728581 00:38:34.157 11:18:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 728256 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 728256 ']' 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 728256 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 728256 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 728256' 00:38:34.157 killing process with pid 728256 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@971 -- # kill 728256 00:38:34.157 11:18:53 keyring_linux -- common/autotest_common.sh@976 -- # wait 728256 00:38:34.418 00:38:34.418 real 0m5.248s 00:38:34.418 user 0m9.798s 00:38:34.418 sys 0m1.439s 00:38:34.418 11:18:53 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:34.418 11:18:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:34.418 ************************************ 00:38:34.418 END TEST keyring_linux 00:38:34.418 ************************************ 00:38:34.418 11:18:53 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:34.418 11:18:53 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:34.418 11:18:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:34.418 11:18:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:34.418 11:18:53 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:34.418 11:18:53 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:34.418 11:18:53 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:34.418 11:18:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:34.418 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:38:34.418 11:18:53 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:34.418 11:18:53 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:38:34.418 11:18:53 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:38:34.418 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:38:42.567 INFO: APP EXITING 00:38:42.567 INFO: 
killing all VMs 00:38:42.567 INFO: killing vhost app 00:38:42.567 INFO: EXIT DONE 00:38:45.865 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:45.865 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:45.865 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:50.073 Cleaning 00:38:50.073 Removing: /var/run/dpdk/spdk0/config 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:50.073 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:50.073 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:50.073 Removing: /var/run/dpdk/spdk1/config 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:50.073 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:50.073 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:50.073 Removing: /var/run/dpdk/spdk2/config 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:50.073 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:50.073 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:50.073 Removing: /var/run/dpdk/spdk3/config 00:38:50.073 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:50.073 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:50.073 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:50.073 Removing: /var/run/dpdk/spdk4/config 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:50.073 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:50.073 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:50.073 Removing: /dev/shm/bdev_svc_trace.1 00:38:50.073 Removing: /dev/shm/nvmf_trace.0 00:38:50.073 Removing: /dev/shm/spdk_tgt_trace.pid150072 00:38:50.073 Removing: /var/run/dpdk/spdk0 00:38:50.073 Removing: /var/run/dpdk/spdk1 00:38:50.073 Removing: /var/run/dpdk/spdk2 00:38:50.073 Removing: /var/run/dpdk/spdk3 00:38:50.073 Removing: /var/run/dpdk/spdk4 00:38:50.073 Removing: /var/run/dpdk/spdk_pid148585 00:38:50.073 Removing: /var/run/dpdk/spdk_pid150072 00:38:50.073 Removing: /var/run/dpdk/spdk_pid150923 00:38:50.073 Removing: /var/run/dpdk/spdk_pid151963 00:38:50.073 Removing: /var/run/dpdk/spdk_pid152303 00:38:50.073 Removing: /var/run/dpdk/spdk_pid153368 00:38:50.073 Removing: /var/run/dpdk/spdk_pid153553 00:38:50.073 Removing: /var/run/dpdk/spdk_pid153840 00:38:50.073 Removing: /var/run/dpdk/spdk_pid154979 00:38:50.073 Removing: /var/run/dpdk/spdk_pid155695 00:38:50.073 Removing: /var/run/dpdk/spdk_pid156049 00:38:50.073 Removing: /var/run/dpdk/spdk_pid156393 00:38:50.073 Removing: /var/run/dpdk/spdk_pid156743 00:38:50.073 Removing: /var/run/dpdk/spdk_pid157168 00:38:50.073 Removing: /var/run/dpdk/spdk_pid157525 00:38:50.073 Removing: /var/run/dpdk/spdk_pid157875 00:38:50.073 Removing: /var/run/dpdk/spdk_pid158231 00:38:50.074 Removing: /var/run/dpdk/spdk_pid159784 00:38:50.074 Removing: /var/run/dpdk/spdk_pid163198 00:38:50.074 Removing: /var/run/dpdk/spdk_pid163502 00:38:50.074 Removing: /var/run/dpdk/spdk_pid163848 00:38:50.074 Removing: /var/run/dpdk/spdk_pid164131 00:38:50.074 Removing: /var/run/dpdk/spdk_pid164507 00:38:50.074 Removing: /var/run/dpdk/spdk_pid164741 00:38:50.074 Removing: /var/run/dpdk/spdk_pid165214 00:38:50.074 Removing: /var/run/dpdk/spdk_pid165265 00:38:50.074 Removing: /var/run/dpdk/spdk_pid165593 00:38:50.074 Removing: /var/run/dpdk/spdk_pid165929 00:38:50.074 Removing: /var/run/dpdk/spdk_pid165980 00:38:50.074 Removing: /var/run/dpdk/spdk_pid166295 00:38:50.074 Removing: /var/run/dpdk/spdk_pid166751 00:38:50.074 Removing: /var/run/dpdk/spdk_pid167103 00:38:50.074 Removing: /var/run/dpdk/spdk_pid167500 00:38:50.074 Removing: /var/run/dpdk/spdk_pid172031 00:38:50.074 Removing: /var/run/dpdk/spdk_pid177356 00:38:50.074 
Removing: /var/run/dpdk/spdk_pid189354 00:38:50.074 Removing: /var/run/dpdk/spdk_pid190193 00:38:50.074 Removing: /var/run/dpdk/spdk_pid195301 00:38:50.074 Removing: /var/run/dpdk/spdk_pid195795 00:38:50.074 Removing: /var/run/dpdk/spdk_pid201015 00:38:50.074 Removing: /var/run/dpdk/spdk_pid208112 00:38:50.074 Removing: /var/run/dpdk/spdk_pid211774 00:38:50.074 Removing: /var/run/dpdk/spdk_pid224299 00:38:50.074 Removing: /var/run/dpdk/spdk_pid235283 00:38:50.074 Removing: /var/run/dpdk/spdk_pid237364 00:38:50.074 Removing: /var/run/dpdk/spdk_pid238383 00:38:50.074 Removing: /var/run/dpdk/spdk_pid259092 00:38:50.074 Removing: /var/run/dpdk/spdk_pid264178 00:38:50.074 Removing: /var/run/dpdk/spdk_pid319773 00:38:50.074 Removing: /var/run/dpdk/spdk_pid326633 00:38:50.074 Removing: /var/run/dpdk/spdk_pid333782 00:38:50.074 Removing: /var/run/dpdk/spdk_pid341684 00:38:50.074 Removing: /var/run/dpdk/spdk_pid341690 00:38:50.074 Removing: /var/run/dpdk/spdk_pid342693 00:38:50.074 Removing: /var/run/dpdk/spdk_pid343695 00:38:50.074 Removing: /var/run/dpdk/spdk_pid344704 00:38:50.074 Removing: /var/run/dpdk/spdk_pid345380 00:38:50.074 Removing: /var/run/dpdk/spdk_pid345404 00:38:50.074 Removing: /var/run/dpdk/spdk_pid345715 00:38:50.074 Removing: /var/run/dpdk/spdk_pid345896 00:38:50.074 Removing: /var/run/dpdk/spdk_pid346012 00:38:50.074 Removing: /var/run/dpdk/spdk_pid347050 00:38:50.074 Removing: /var/run/dpdk/spdk_pid348055 00:38:50.074 Removing: /var/run/dpdk/spdk_pid349084 00:38:50.074 Removing: /var/run/dpdk/spdk_pid349750 00:38:50.074 Removing: /var/run/dpdk/spdk_pid349752 00:38:50.074 Removing: /var/run/dpdk/spdk_pid350088 00:38:50.074 Removing: /var/run/dpdk/spdk_pid351218 00:38:50.074 Removing: /var/run/dpdk/spdk_pid352606 00:38:50.074 Removing: /var/run/dpdk/spdk_pid362494 00:38:50.074 Removing: /var/run/dpdk/spdk_pid397060 00:38:50.074 Removing: /var/run/dpdk/spdk_pid402469 00:38:50.074 Removing: /var/run/dpdk/spdk_pid404471 00:38:50.074 Removing: /var/run/dpdk/spdk_pid407178 00:38:50.074 Removing: /var/run/dpdk/spdk_pid407510 00:38:50.074 Removing: /var/run/dpdk/spdk_pid407749 00:38:50.074 Removing: /var/run/dpdk/spdk_pid408089 00:38:50.074 Removing: /var/run/dpdk/spdk_pid408816 00:38:50.074 Removing: /var/run/dpdk/spdk_pid411158 00:38:50.074 Removing: /var/run/dpdk/spdk_pid412268 00:38:50.074 Removing: /var/run/dpdk/spdk_pid412955 00:38:50.074 Removing: /var/run/dpdk/spdk_pid415665 00:38:50.074 Removing: /var/run/dpdk/spdk_pid416362 00:38:50.074 Removing: /var/run/dpdk/spdk_pid417091 00:38:50.074 Removing: /var/run/dpdk/spdk_pid422151 00:38:50.074 Removing: /var/run/dpdk/spdk_pid428856 00:38:50.074 Removing: /var/run/dpdk/spdk_pid428857 00:38:50.074 Removing: /var/run/dpdk/spdk_pid428858 00:38:50.074 Removing: /var/run/dpdk/spdk_pid433548 00:38:50.074 Removing: /var/run/dpdk/spdk_pid443803 00:38:50.074 Removing: /var/run/dpdk/spdk_pid448630 00:38:50.074 Removing: /var/run/dpdk/spdk_pid455942 00:38:50.074 Removing: /var/run/dpdk/spdk_pid457900 00:38:50.074 Removing: /var/run/dpdk/spdk_pid459741 00:38:50.074 Removing: /var/run/dpdk/spdk_pid461271 00:38:50.074 Removing: /var/run/dpdk/spdk_pid466973 00:38:50.335 Removing: /var/run/dpdk/spdk_pid472283 00:38:50.335 Removing: /var/run/dpdk/spdk_pid477186 00:38:50.335 Removing: /var/run/dpdk/spdk_pid486424 00:38:50.335 Removing: /var/run/dpdk/spdk_pid486554 00:38:50.335 Removing: /var/run/dpdk/spdk_pid491618 00:38:50.335 Removing: /var/run/dpdk/spdk_pid491944 00:38:50.335 Removing: /var/run/dpdk/spdk_pid492144 00:38:50.335 Removing: 
/var/run/dpdk/spdk_pid492623 00:38:50.335 Removing: /var/run/dpdk/spdk_pid492637 00:38:50.335 Removing: /var/run/dpdk/spdk_pid498331 00:38:50.335 Removing: /var/run/dpdk/spdk_pid499007 00:38:50.335 Removing: /var/run/dpdk/spdk_pid504337 00:38:50.335 Removing: /var/run/dpdk/spdk_pid507688 00:38:50.335 Removing: /var/run/dpdk/spdk_pid514789 00:38:50.335 Removing: /var/run/dpdk/spdk_pid521380 00:38:50.335 Removing: /var/run/dpdk/spdk_pid531630 00:38:50.335 Removing: /var/run/dpdk/spdk_pid540442 00:38:50.335 Removing: /var/run/dpdk/spdk_pid540447 00:38:50.335 Removing: /var/run/dpdk/spdk_pid563430 00:38:50.335 Removing: /var/run/dpdk/spdk_pid564302 00:38:50.335 Removing: /var/run/dpdk/spdk_pid565258 00:38:50.335 Removing: /var/run/dpdk/spdk_pid566228 00:38:50.335 Removing: /var/run/dpdk/spdk_pid567287 00:38:50.335 Removing: /var/run/dpdk/spdk_pid567973 00:38:50.335 Removing: /var/run/dpdk/spdk_pid568672 00:38:50.335 Removing: /var/run/dpdk/spdk_pid569477 00:38:50.335 Removing: /var/run/dpdk/spdk_pid574713 00:38:50.335 Removing: /var/run/dpdk/spdk_pid575002 00:38:50.335 Removing: /var/run/dpdk/spdk_pid582092 00:38:50.335 Removing: /var/run/dpdk/spdk_pid582467 00:38:50.335 Removing: /var/run/dpdk/spdk_pid588929 00:38:50.335 Removing: /var/run/dpdk/spdk_pid593981 00:38:50.335 Removing: /var/run/dpdk/spdk_pid605587 00:38:50.335 Removing: /var/run/dpdk/spdk_pid606304 00:38:50.335 Removing: /var/run/dpdk/spdk_pid611474 00:38:50.335 Removing: /var/run/dpdk/spdk_pid611899 00:38:50.335 Removing: /var/run/dpdk/spdk_pid617548 00:38:50.335 Removing: /var/run/dpdk/spdk_pid624325 00:38:50.335 Removing: /var/run/dpdk/spdk_pid627398 00:38:50.335 Removing: /var/run/dpdk/spdk_pid639616 00:38:50.335 Removing: /var/run/dpdk/spdk_pid650348 00:38:50.335 Removing: /var/run/dpdk/spdk_pid652370 00:38:50.335 Removing: /var/run/dpdk/spdk_pid653528 00:38:50.335 Removing: /var/run/dpdk/spdk_pid673742 00:38:50.335 Removing: /var/run/dpdk/spdk_pid678472 00:38:50.335 Removing: /var/run/dpdk/spdk_pid681656 00:38:50.335 Removing: /var/run/dpdk/spdk_pid689431 00:38:50.335 Removing: /var/run/dpdk/spdk_pid689446 00:38:50.335 Removing: /var/run/dpdk/spdk_pid695322 00:38:50.335 Removing: /var/run/dpdk/spdk_pid697714 00:38:50.335 Removing: /var/run/dpdk/spdk_pid700048 00:38:50.335 Removing: /var/run/dpdk/spdk_pid701397 00:38:50.335 Removing: /var/run/dpdk/spdk_pid703760 00:38:50.335 Removing: /var/run/dpdk/spdk_pid705284 00:38:50.335 Removing: /var/run/dpdk/spdk_pid715224 00:38:50.335 Removing: /var/run/dpdk/spdk_pid715903 00:38:50.335 Removing: /var/run/dpdk/spdk_pid716665 00:38:50.596 Removing: /var/run/dpdk/spdk_pid719999 00:38:50.596 Removing: /var/run/dpdk/spdk_pid720451 00:38:50.596 Removing: /var/run/dpdk/spdk_pid721093 00:38:50.596 Removing: /var/run/dpdk/spdk_pid725965 00:38:50.596 Removing: /var/run/dpdk/spdk_pid725985 00:38:50.596 Removing: /var/run/dpdk/spdk_pid727802 00:38:50.596 Removing: /var/run/dpdk/spdk_pid728256 00:38:50.596 Removing: /var/run/dpdk/spdk_pid728581 00:38:50.596 Clean 00:38:50.596 11:19:09 -- common/autotest_common.sh@1451 -- # return 0 00:38:50.596 11:19:09 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:50.596 11:19:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:50.596 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:38:50.596 11:19:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:50.596 11:19:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:50.596 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:38:50.596 11:19:10 -- 
spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:50.596 11:19:10 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:50.596 11:19:10 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:50.596 11:19:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:50.596 11:19:10 -- spdk/autotest.sh@394 -- # hostname 00:38:50.596 11:19:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:50.857 geninfo: WARNING: invalid characters removed from testname! 00:39:17.432 11:19:35 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:19.347 11:19:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:21.891 11:19:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:23.273 11:19:42 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:24.655 11:19:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.564 11:19:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:27.947 11:19:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:27.947 11:19:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:27.947 11:19:47 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:27.947 11:19:47 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:27.947 11:19:47 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:27.947 11:19:47 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:27.947 + [[ -n 63151 ]] 00:39:27.947 + sudo kill 63151 00:39:28.218 [Pipeline] } 00:39:28.234 [Pipeline] // stage 00:39:28.239 [Pipeline] } 00:39:28.255 [Pipeline] // timeout 00:39:28.260 [Pipeline] } 00:39:28.275 [Pipeline] // catchError 00:39:28.280 [Pipeline] } 00:39:28.295 [Pipeline] // wrap 00:39:28.301 [Pipeline] } 00:39:28.315 [Pipeline] // catchError 00:39:28.324 [Pipeline] stage 00:39:28.327 [Pipeline] { (Epilogue) 00:39:28.339 [Pipeline] catchError 00:39:28.341 [Pipeline] { 00:39:28.354 [Pipeline] echo 00:39:28.356 Cleanup processes 00:39:28.362 [Pipeline] sh 00:39:28.651 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:28.651 741580 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:28.666 [Pipeline] sh 00:39:28.955 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:28.955 ++ grep -v 'sudo pgrep' 00:39:28.955 ++ awk '{print $1}' 00:39:28.955 + sudo kill -9 00:39:28.955 + true 00:39:28.968 [Pipeline] sh 00:39:29.256 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:41.504 [Pipeline] sh 00:39:41.793 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:41.794 Artifacts sizes are good 00:39:41.812 [Pipeline] archiveArtifacts 00:39:41.819 Archiving artifacts 00:39:41.946 [Pipeline] sh 00:39:42.232 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:42.248 [Pipeline] cleanWs 00:39:42.259 [WS-CLEANUP] Deleting project workspace... 00:39:42.259 [WS-CLEANUP] Deferred wipeout is used... 00:39:42.267 [WS-CLEANUP] done 00:39:42.269 [Pipeline] } 00:39:42.288 [Pipeline] // catchError 00:39:42.300 [Pipeline] sh 00:39:42.617 + logger -p user.info -t JENKINS-CI 00:39:42.709 [Pipeline] } 00:39:42.723 [Pipeline] // stage 00:39:42.728 [Pipeline] } 00:39:42.742 [Pipeline] // node 00:39:42.748 [Pipeline] End of Pipeline 00:39:42.782 Finished: SUCCESS